Spring Cloud Stream connect to multiple hosts for single binder (RabbitMQ)

We are using Spring Cloud Stream to listen to multiple RabbitMQ queues, specifically with the Spring Cloud Function (SCF) model, since the spring-cloud-stream-reactive module is deprecated in favor of native support via the Spring Cloud Function programming model.
As long as there was a single node/host it worked fine (application.yml snippet shared below), but the moment we try to connect to multiple nodes it fails.
Can someone guide us on how to connect to multiple hosts, or share a sample from the Spring Cloud documentation?
The following configuration is working as expected:
spring:
  cloud:
    stream:
      function:
        definition: function1;function2;function3
      bindings:
        function1-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit
        function2-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit
        function3-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit
      rabbit:
        bindings:
          function1-in-0:
            consumer:
              bindingRoutingKey: routing.key.1
          function2-in-0:
            consumer:
              bindingRoutingKey: routing.key.2
          function3-in-0:
            consumer:
              bindingRoutingKey: routing.key.3
        binder:
          nodes: address1
Basically it needs to be something like the following:
spring:
  cloud:
    stream:
      function:
        definition: function1;function2;function3
      bindings:
        function1-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit1
        function2-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit2
        function3-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit3
      binder:
        rabbit1:
          function1-in-0:
            consumer:
              bindingRoutingKey: routing.key.1
          binder:
            nodes: address1
        rabbit2:
          function2-in-0:
            consumer:
              bindingRoutingKey: routing.key.2
          binder:
            nodes: address2
        rabbit3:
          function3-in-0:
            consumer:
              bindingRoutingKey: routing.key.3
          binder:
            nodes: address3
With just the following addition:
binders:
  rabbit1:
    type: rabbit
    environment:
      spring.spring.cloud.stream.kafka:
        binder:
          nodes: localhost
I am getting this error:
o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is java.lang.IllegalStateException: Unknown binder configuration: rabbit
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.8.jar:5.3.8]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332) ~[spring-boot-2.5.2.jar:2.5.2]
at com.gap.pem.Application.main(Application.java:14) ~[main/:na]
Caused by: java.lang.IllegalStateException: Unknown binder configuration: rabbit
at org.springframework.util.Assert.state(Assert.java:76) ~[spring-core-5.3.8.jar:5.3.8]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinderInstance(DefaultBinderFactory.java:255) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.doGetBinder(DefaultBinderFactory.java:224) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinder(DefaultBinderFactory.java:152) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.BindingService.getBinder(BindingService.java:386) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.BindingService.bindConsumer(BindingService.java:103) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.AbstractBindableProxyFactory.createAndBindInputs(AbstractBindableProxyFactory.java:118) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.doStartWithBindable(InputBindingLifecycle.java:58) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at java.base/java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608) ~[na:na]
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.start(AbstractBindingLifecycle.java:57) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.start(InputBindingLifecycle.java:34) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.8.jar:5.3.8]
... 14 common frames omitted
Process finished with exit code 1
We have the following dependencies available:
implementation 'org.springframework.cloud:spring-cloud-stream'
implementation 'org.springframework.cloud:spring-cloud-stream-binder-kafka'
implementation 'org.springframework.cloud:spring-cloud-stream-binder-rabbit'

I struggled a lot getting this to run, especially with routing keys. In the end my solution was a bit different from what is shown here. I am posting it in the hope that it will help someone in the future (a sketch of how the producer bindings can be fed follows the config):
spring:
  cloud:
    stream:
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: some.server.com:5672,some.other.server.com:5672
                username: someUserName
                password: someUserPassword
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: some.rabbit2.server.us:5672,someother.rabbit2.server.de:5672
                username: secondRabbitUserName
                password: secondRabbitPassword
      function:
        definition: someReceiver;anotherRec
      rabbit:
        bindings:
          someReceiver-in-0:
            consumer:
              auto-bind-dlq: true
              republishToDlq: true
              bindingRoutingKey: some.routing.key.#
          anotherRec-in-0:
            consumer:
              bindingRoutingKey: finished.#
          firstAction-out-0:
            producer:
              routingKeyExpression: "'switch.'+headers.someValue+'.'+headers.someOtherValue"
              bindingRoutingKey: switch.#
          userNotification-out-0:
            producer:
              routingKeyExpression: "'switch.someKeyExpression'"
              bindingRoutingKey: switch.#
      bindings:
        anotherRec-in-0:
          destination: reciving.exchange.name
          group: some-queue-name-v7
          binder: rabbit1
        firstAction-out-0:
          destination: some.exhange.name.1
          binder: rabbit1
        someReceiver-in-0:
          destination: another.exchange.name.1
          group: queueName
          binder: rabbit2
        userNotification-out-0:
          destination: the.exchange.name
          binder: rabbit2
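As a rough illustration of how a producer binding such as firstAction-out-0 can be fed, here is a hypothetical StreamBridge publisher. The class name, payload type, and header values are made up for the example; only the binding name and the header names referenced by the routingKeyExpression come from the config above:

import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;

@Component
public class FirstActionPublisher {

    private final StreamBridge streamBridge;

    public FirstActionPublisher(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    // Sends to the firstAction-out-0 binding; the headers feed the
    // routingKeyExpression 'switch.'+headers.someValue+'.'+headers.someOtherValue
    public void publish(String payload) {
        streamBridge.send("firstAction-out-0",
                MessageBuilder.withPayload(payload)
                        .setHeader("someValue", "on")
                        .setHeader("someOtherValue", "zone1")
                        .build());
    }
}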

Adding the binders config for both rabbit1 and rabbit2 resolved the issue.
Below is the sample config I tried, with which I was able to consume messages successfully (a sketch of the matching consumer beans follows the config):
spring:
  cloud:
    stream:
      function:
        definition: processFirstConsumer;processSecondConsumer
      bindings:
        processFirstConsumer-in-0:
          group: allocation
          destination: userMessage1
          binder: rabbit1
        processSecondConsumer-in-0:
          group: allocation
          destination: userMessage2
          binder: rabbit2
      binder:
        rabbit1:
          processFirstConsumer-in-0:
            consumer:
              bindingRoutingKey: routing.key.1
          binder:
            nodes: address1
        rabbit2:
          processSecondConsumer-in-0:
            consumer:
              bindingRoutingKey: routing.key.2
          binder:
            nodes: address2
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: /
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: /
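For completeness, a minimal sketch of the functional consumer beans this configuration assumes. The payload type (String) and the bean bodies are illustrative only and are not part of the original answer:

import java.util.function.Consumer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

@Configuration
public class ConsumerConfiguration {

    // Bound to processFirstConsumer-in-0 (queue on the rabbit1 binder)
    @Bean
    public Consumer<Message<String>> processFirstConsumer() {
        return message -> System.out.println("rabbit1 message: " + message.getPayload());
    }

    // Bound to processSecondConsumer-in-0 (queue on the rabbit2 binder)
    @Bean
    public Consumer<Message<String>> processSecondConsumer() {
        return message -> System.out.println("rabbit2 message: " + message.getPayload());
    }
}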

Related

Found duplicate key in application.yml

I have been working on a Kafka Streams project. I am new to YAML files; I have been using application.properties. Basically I want to use the Kafka properties as well as the Springfox properties. Here is my
application.yml
spring:
  cloud:
    function:
      definition: consumer;producer
    stream:
      kafka:
        bindings:
          producer-out-0:
            producer:
              configuration:
                value.serializer: <packagename>
          consumer-in-0:
            consumer:
              configuration:
                value.deserializer: <packagename>
        binder:
          brokers: localhost:9092
      bindings:
        producer-out-0:
          destination: <topic name>
          producer:
            useNativeEncoding: true # Enables using the custom serializer
        consumer-in-0:
          destination: <topic name>
          consumer:
            use-native-decoding: true # Enables using the custom deserializer
spring:
  mvc:
    pathmatch:
      matching-strategy: ANT_PATH_MATCHER
It shows
in 'reader', line 1, column 1:
spring:
^
found duplicate key spring
in 'reader', line 31, column 1:
spring:
^
I tried adding mvc: in between and other things, but nothing seems to work. I don't know how to get the spring key as the parent of mvc: pathmatch: without ruining the configuration above.
spring is a duplicate! Your YAML should look like this:
spring:
  cloud:
    function:
      definition: consumer;producer
    stream:
      kafka:
        bindings:
          producer-out-0:
            producer:
              configuration:
                value.serializer: <packagename>
          consumer-in-0:
            consumer:
              configuration:
                value.deserializer: <packagename>
        binder:
          brokers: localhost:9092
      bindings:
        producer-out-0:
          destination: <topic name>
          producer:
            useNativeEncoding: true # Enables using the custom serializer
        consumer-in-0:
          destination: <topic name>
          consumer:
            use-native-decoding: true # Enables using the custom deserializer
  mvc:
    pathmatch:
      matching-strategy: ANT_PATH_MATCHER

Spring Cloud Kafka Stream: Publishing to DLQ is failing with Avro

I'm unable to publish to the DLQ topic while using ErrorHandlingDeserializer for error handling in combination with Avro. Below is the error while publishing:
Topic TOPIC_DLT not present in metadata after 60000 ms.
ERROR KafkaConsumerDestination{consumerDestinationName='TOPIC', partitions=6, dlqName='TOPIC_DLT'}.container-0-C-1 o.s.i.h.LoggingHandler:250 - org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1#49abe531]; nested exception is java.lang.RuntimeException: failed, failedMessage=GenericMessage
And here is the application.yml
spring:
  cloud:
    stream:
      bindings:
        process-in-0:
          destination: TOPIC
          group: groupID
      kafka:
        binder:
          brokers:
            - xxx:9092
          configuration:
            security.protocol: SASL_SSL
            sasl.mechanism: PLAIN
          jaas:
            loginModule: org.apache.kafka.common.security.plain.PlainLoginModule
            options:
              username: username
              password: pwd
          consumer-properties:
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
          producer-properties:
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
        bindings:
          process-in-0:
            consumer:
              configuration:
                basic.auth.credentials.source: USER_INFO
                schema.registry.url: registryUrl
                schema.registry.basic.auth.user.info: user:pwd
                security.protocol: SASL_SSL
                sasl.mechanism: PLAIN
              max-attempts: 1
              dlqProducerProperties:
                configuration:
                  basic.auth.credentials.source: USER_INFO
                  schema.registry.url: registryUrl
                  schema.registry.basic.auth.user.info: user:pwd
                  key.serializer: org.apache.kafka.common.serialization.StringSerializer
                  value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              deserializationExceptionHandler: sendToDlq
              ackEachRecord: true
              enableDlq: true
              dlqName: TOPIC_DLT
              autoCommitOnError: true
              autoCommitOffset: true
I'm using the following dependencies:
spring-cloud-dependencies - 2021.0.1
spring-boot-starter-parent - 2.6.3
spring-cloud-stream-binder-kafka
kafka-schema-registry-client - 5.3.0
kafka-avro-serializer - 5.3.0
I'm not sure what exactly I'm missing.
After going through a lot of documentation, I found out that for Spring to do the job of publishing to the DLQ, we need the same number of partitions for both the original topic and the DLT topic. If that can't be done, then we need to set dlqPartitions to 1 or manually provide a DlqPartitionFunction bean. By providing dlqPartitions: 1, all the messages will go to partition 0. A minimal sketch of such a bean is shown below.
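A minimal sketch of a custom DlqPartitionFunction bean, assuming the DlqPartitionFunction interface from the spring-cloud-stream-binder-kafka dependency already used in this project; the routing logic here is illustrative only:

import org.springframework.cloud.stream.binder.kafka.DlqPartitionFunction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DlqConfiguration {

    // Routes every failed record to partition 0 of the DLT,
    // regardless of how many partitions the original topic has.
    @Bean
    public DlqPartitionFunction dlqPartitionFunction() {
        return (group, record, exception) -> 0;
    }
}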

spring-cloud-stream-binder-kafka - Unable to create multiple kafka binders with ssl configuration

I am trying to connect to a kafka cluster through SASL_SSL protocol with jaas config as follows:
spring:
  cloud:
    stream:
      bindings:
        binding-1:
          binder: kafka-1-with-ssl
          destination: <destination-1>
          content-type: text/plain
          group: <group-id-1>
          consumer:
            header-mode: headers
        binding-2:
          binder: kafka-2-with-ssl
          destination: <destination-2>
          content-type: text/plain
          group: <group-id-2>
          consumer:
            header-mode: headers
      binders:
        kafka-1-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-1>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-1>
                            password: <ts-password-1>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-1>
                          password: <password-1>
        kafka-2-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-2>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-2>
                            password: <ts-password-2>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-2>
                          password: <password-2>
      kafka:
        binder:
          configuration:
            security:
              protocol: SASL_SSL
            sasl:
              mechanism: SCRAM-SHA-256
The above configuration is in line with the sample config available in spring-cloud-stream's official git repo.
A similar issue raised on the library's git repo says it is fixed in the latest versions, but that does not seem to be the case here. I am using springBootVersion 2.2.8 and spring-cloud-stream-dependencies version Horsham.SR6, and I am getting the following error:
Failed to create consumer binding; retrying in 30 seconds | org.springframework.cloud.stream.binder.BinderException: Exception thrown while starting consumer:
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:461)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:90)
at org.springframework.cloud.stream.binder.AbstractBinder.bindConsumer(AbstractBinder.java:143)
at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleConsumerBinding$1(BindingService.java:201)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:68)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:407)
at org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:65)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createAdminClient(KafkaTopicProvisioner.java:246)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.doProvisionConsumerDestination(KafkaTopicProvisioner.java:216)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionConsumerDestination(KafkaTopicProvisioner.java:183)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionConsumerDestination(KafkaTopicProvisioner.java:79)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindConsumer(AbstractMessageChannelBinder.java:402)
... 12 common frames omitted
Caused by: org.apache.kafka.common.KafkaException: javax.security.auth.login.LoginException: KrbException: Cannot locate default realm
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:160)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:67)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:99)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:382)
... 18 common frames omitted
Caused by: javax.security.auth.login.LoginException: KrbException: Cannot locate default realm
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:804)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
at org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:60)
at org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:61)
at org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:111)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:149)
... 22 common frames omitted
Caused by: sun.security.krb5.RealmException: KrbException: Cannot locate default realm
at sun.security.krb5.Realm.getDefault(Realm.java:68)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:462)
at sun.security.krb5.PrincipalName.<init>(PrincipalName.java:471)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:706)
... 38 common frames omitted
Caused by: sun.security.krb5.KrbException: Cannot locate default realm
at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
at sun.security.krb5.Realm.getDefault(Realm.java:64)
... 41 common frames omitted
This makes me think that the library is not picking up the config properties properly, because jaas.loginModule is specified as ScramLoginModule yet it is using Krb5LoginModule to authenticate.
Strikingly, when the configuration is done as follows (the difference lies in the last part, with the SSL credentials outside the binder's environment), it connects to the binder specified in the global SSL props (outside the binder's env) and silently ignores the other binder without any error logs.
Say the credentials of the binder kafka-2-with-ssl are specified in the global SSL props: that binder is created and the bindings subscribed to it start consuming events. But this is only useful when we need a single binder.
spring:
  cloud:
    stream:
      bindings:
        binding-1:
          binder: kafka-1-with-ssl
          destination: <destination-1>
          content-type: text/plain
          group: <group-id-1>
          consumer:
            header-mode: headers
        binding-2:
          binder: kafka-2-with-ssl
          destination: <destination-2>
          content-type: text/plain
          group: <group-id-2>
          consumer:
            header-mode: headers
      binders:
        kafka-1-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-1>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-1>
                            password: <ts-password-1>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-1>
                          password: <password-1>
        kafka-2-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-2>
                      configuration:
                        ssl:
                          truststore:
                            location: <location-2>
                            password: <ts-password-2>
                            type: JKS
                      jaas:
                        loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                        options:
                          username: <username-2>
                          password: <password-2>
      kafka:
        binder:
          configuration:
            security:
              protocol: SASL_SSL
            sasl:
              mechanism: SCRAM-SHA-256
            ssl:
              truststore:
                location: <location-2>
                password: <ts-password-2>
                type: JKS
          jaas:
            loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
            options:
              username: <username-2>
              password: <password-2>
I assure you that nothing is wrong with the SSL credentials; I tested diligently, and either SSL Kafka binder gets created successfully on its own. The aim is to connect to multiple Kafka binders with the SASL_SSL protocol. Thanks in advance.
I think you may want to follow the solution implemented in KIP-85 for this issue. Instead of using the JAAS configuration provided by the Spring Cloud Stream Kafka binder or setting the java.security.auth.login.config property, use the sasl.jaas.config property, which takes precedence over the other methods. By using sasl.jaas.config, you can override the restriction placed by the JVM in which a JVM-wide static security context is used, thus ignoring any subsequent JAAS configurations found after the first one.
Here is a sample application that demonstrates how to connect to multiple Kafka clusters with different security contexts as a multi-binder application.
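For illustration only (not taken from the linked sample), a hedged sketch of what setting sasl.jaas.config per binder might look like. The binder name, brokers, and placeholders follow the question's config; the second binder would repeat the same block with its own values, and the exact property layout should be verified against your binder version:

spring:
  cloud:
    stream:
      binders:
        kafka-1-with-ssl:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: <broker-hostnames-1>
                      configuration:
                        security.protocol: SASL_SSL
                        sasl.mechanism: SCRAM-SHA-256
                        # sasl.jaas.config is a plain Kafka client property, so the
                        # JAAS credentials stay scoped to this binder's clients only.
                        sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="<username-1>" password="<password-1>";
                        ssl.truststore.location: <location-1>
                        ssl.truststore.password: <ts-password-1>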

Spring Boot application using the Spring Cloud Stream Kafka Binder + Kafka Streams Binder not working - Producer doesn't send messages

My Spring Boot 2.3.1 app with SCS Horsham.SR6 was using the Kafka Streams binder. I needed to add a Kafka producer to be used in another part of the application, so I added the Kafka binder. The problem is that the producer is not working and throws an exception:
19:49:40.082 [scheduling-1] [900cdeb11106e199] ERROR o.s.c.stream.binding.BindingService - Failed to create producer binding; retrying in 30 seconds
org.springframework.cloud.stream.provisioning.ProvisioningException: Provisioning exception; nested exception is java.util.concurrent.TimeoutException
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:332)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:148)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:79)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:222)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:90)
at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:152)
at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleProducerBinding$4(BindingService.java:336)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:68)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.util.concurrent.TimeoutException: null
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicAndPartitions(KafkaTopicProvisioner.java:368)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicIfNecessary(KafkaTopicProvisioner.java:342)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:319)
This is my configuration:
spring:
  cloud:
    function:
      definition: myProducer
    stream:
      bindings:
        myKStream-in-0:
          destination: my-kstream-topic
          producer:
            useNativeEncoding: true
        myProducer-out-0:
          destination: producer-topic
          producer:
            useNativeEncoding: true
      kafka:
        binder:
          brokers: ${kafka.brokers:localhost}
          min-partition-count: 3
          replication-factor: 3
          producerProperties:
            enable:
              idempotence: true
            retries: 0x7fffffff
            acks: all
            key:
              serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              subject:
                name:
                  strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
            value:
              serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              subject:
                name:
                  strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
            schema:
              registry:
                url: ${schema-registry.url:http://localhost:8081}
            request:
              timeout:
                ms: 5000
        streams:
          binder:
            brokers: ${kafka.brokers:localhost}
            configuration:
              application:
                id: ${spring.application.name}
                server: ${POD_IP:localhost}:${local.server.port:8080}
              schema:
                registry:
                  url: ${schema-registry.url}
              key:
                subject:
                  name:
                    strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
              value:
                subject:
                  name:
                    strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
              processing:
                guarantee: exactly_once
              replication:
                factor: 3
              group:
                id: kpi
            deserialization-exception-handler: logandcontinue
            min-partition-count: 3
            replication-factor: 3
            state-store-retry:
              max-attempts: 20
              backoff-period: 1500
What could be the problem here?
UPDATE
I tweaked the config as follows:
spring:
  cloud:
    function:
      definition: myProducer
    stream:
      function:
        definition: myKStream
Now I don't see any exception, but the messages don't reach the topic.
In another application that uses only the Kafka binder, it works perfectly:
@Configuration
class KafkaProducerConfiguration {

    @Bean
    fun myProducerProcessor(): EmitterProcessor<Message<XXX>> {
        return EmitterProcessor.create()
    }

    @Bean
    fun myProducer(): Supplier<Flux<Message<XXX>>> {
        return Supplier { myProducerProcessor() }
    }
}
...
@Component
class XXXProducer(@Qualifier("myProducerProcessor") private val myProducerProcessor: EmitterProcessor<Message<XXX>>) {

    fun send(...): Mono<Void> {
        return Mono.defer {
            myProducerProcessor.onNext(message)
            Mono.empty()
        }
    }
}
UPDATE 2
I set logging.level.org.springframework.cloud.stream: debug
In the logs the following trace shows up:
o.s.c.s.binder.DefaultBinderFactory - Creating binder: kstream
However, there is nothing about Creating binder: kafka.
I was missing the multi-binder config for the kafka and kstream binders (https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/3.0.0.RELEASE/reference/html/spring-cloud-stream-binder-kafka.html#_multi_binders_with_kafka_streams_based_binders_and_regular_kafka_binder).
Thus, I had to set spring.cloud.stream.function.definition=myProducer;myKStream, roughly as sketched below.
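A hedged sketch of what such a multi-binder setup might look like, following the linked documentation; the binder names kafkaBinder and kstreamBinder are placeholders and were not part of the original answer:

spring:
  cloud:
    stream:
      function:
        definition: myProducer;myKStream
      binders:
        kafkaBinder:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: ${kafka.brokers:localhost}
        kstreamBinder:
          type: kstream
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: ${kafka.brokers:localhost}
      bindings:
        myProducer-out-0:
          binder: kafkaBinder
          destination: producer-topic
        myKStream-in-0:
          binder: kstreamBinder
          destination: my-kstream-topic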

Spring Cloud Stream Kafka Multiple Binding

I am using the Spring Cloud Stream Kafka binder to consume messages from Kafka. I am able to make my sample work with a single Kafka binder as below:
spring:
  cloud:
    stream:
      kafka:
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <broker url>
      bindings:
        consumer:
          destination: some-topic
          group: testconsumergroup
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          destination: some-other-topic
          producer:
            valueSerde: JsonSerde
Note that both bindings point to the same Kafka broker here. However, I have a situation where I need to publish to a topic in one Kafka cluster and also consume from a topic in a different Kafka cluster. How should I change my configuration to be able to bind to different Kafka clusters?
I tried something like this
spring:
  cloud:
    stream:
      binders:
        defaultbinder:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster1-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster2-brokers>
      bindings:
        consumer:
          binder: kafka1
          destination: some-topic
          group: testconsumergroup
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          binder: defaultbinder
          destination: some-topic
          producer:
            valueSerde: JsonSerde
      kafka:
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <cluster1-brokers>
and
spring:
  cloud:
    stream:
      binders:
        defaultbinder:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster1-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster2-brokers>
      kafka:
        bindings:
          consumer:
            binder: kafka1
            destination: some-topic
            group: testconsumergroup
            consumer:
              concurrency: 1
              valueSerde: JsonSerde
          producer:
            binder: defaultbinder
            destination: some-topic
            producer:
              valueSerde: JsonSerde
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <cluster1-brokers>
But neither of them seems to work.
The first configuration seems to be invalid.
For the second configuration I get the error below:
Caused by: java.lang.IllegalStateException: A default binder has been requested, but there is more than one binder available for 'org.springframework.cloud.stream.messaging.DirectWithAttributesChannel' : kafka1,defaultbinder, and no default binder has been set.
I am using the dependency 'org.springframework.cloud:spring-cloud-starter-stream-kafka:3.0.1.RELEASE' and Spring Boot 2.2.6.
Please let me know how to configure multiple bindings for Kafka using Spring Cloud Stream.
Update
I tried the configuration below:
spring:
  cloud:
    stream:
      binders:
        kafka2:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <cluster2-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <cluster1-brokers>
      bindings:
        consumer:
          destination: <some-topic>
          binder: kafka1
          group: testconsumergroup
          content-type: application/json
          nativeEncoding: true
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          destination: some-topic
          binder: kafka2
          contentType: application/json
          nativeEncoding: true
          producer:
            valueSerde: JsonSerde
The MessageStreams interface and the EventHub binding configuration are as follows:
public interface MessageStreams {
    String PRODUCER = "producer";
    String CONSUMER = "consumer";

    @Output(PRODUCER)
    MessageChannel producerChannel();

    @Input(CONSUMER)
    SubscribableChannel consumerChannel();
}

@EnableBinding(MessageStreams.class)
public class EventHubStreamsConfiguration {
}
My producer class looks like this:
@Component
@Slf4j
public class EventPublisher {

    private final MessageStreams messageStreams;

    public EventPublisher(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    public boolean publish(CustomMessage event) {
        MessageChannel messageChannel = getChannel();
        MessageBuilder messageBuilder = MessageBuilder.withPayload(event);
        boolean messageSent = messageChannel.send(messageBuilder.build());
        return messageSent;
    }

    protected MessageChannel getChannel() {
        return messageStreams.producerChannel();
    }
}
And the consumer class looks like this:
@Component
@Slf4j
public class EventHandler {

    private final MessageStreams messageStreams;

    public EventHandler(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    @StreamListener(MessageStreams.CONSUMER)
    public void handleEvent(Message<CustomMessage> message) throws Exception {
        // process the event
    }

    @ServiceActivator(inputChannel = "some-topic.testconsumergroup.errors")
    protected void handleError(ErrorMessage errorMessage) throws Exception {
        // handle error;
    }
}
I am getting the error below while trying to publish and consume messages from my test:
Dispatcher has no subscribers for channel 'application.producer'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[104], headers={contentType=application/json, timestamp=1593517340422}]
Am I missing anything? For a single cluster, I am able to publish and consume messages. The issue only happens with multiple cluster bindings.
