Spring Cloud Stream Kafka Multiple Binding - spring-boot

I am using the Spring Cloud Stream Kafka binder to consume messages from Kafka. I am able to make my sample work with a single Kafka binder, as shown below:
spring:
  cloud:
    stream:
      kafka:
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <broker url>
      bindings:
        consumer:
          destination: some-topic
          group: testconsumergroup
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          destination: some-other-topic
          producer:
            valueSerde: JsonSerde
Note that both bindings point to the same Kafka broker here. However, I have a situation where I need to publish to a topic in one Kafka cluster and also consume from a topic in a different Kafka cluster. How should I change my configuration to be able to bind to different Kafka clusters?
I tried something like this
spring:
  cloud:
    stream:
      binders:
        defaultbinder:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster1-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster2-brokers>
      bindings:
        consumer:
          binder: kafka1
          destination: some-topic
          group: testconsumergroup
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          binder: defaultbinder
          destination: some-topic
          producer:
            valueSerde: JsonSerde
      kafka:
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <cluster1-brokers>
and
spring:
  cloud:
    stream:
      binders:
        defaultbinder:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster1-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster2-brokers>
      kafka:
        bindings:
          consumer:
            binder: kafka1
            destination: some-topic
            group: testconsumergroup
            consumer:
              concurrency: 1
              valueSerde: JsonSerde
          producer:
            binder: defaultbinder
            destination: some-topic
            producer:
              valueSerde: JsonSerde
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <cluster1-brokers>
But neither of them seems to work. The first configuration appears to be invalid, and for the second configuration I get the error below:
Caused by: java.lang.IllegalStateException: A default binder has been requested, but there is more than one binder available for 'org.springframework.cloud.stream.messaging.DirectWithAttributesChannel' : kafka1,defaultbinder, and no default binder has been set.
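As the message indicates, when more than one binder is configured none of them is treated as the default automatically. A minimal sketch of declaring one explicitly, assuming the standard spring.cloud.stream.default-binder property:
spring:
  cloud:
    stream:
      default-binder: defaultbinder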
I am using the dependency 'org.springframework.cloud:spring-cloud-starter-stream-kafka:3.0.1.RELEASE' and Spring Boot 2.2.6.
Please let me know how to configure multiple Kafka bindings using Spring Cloud Stream.
Update
I tried the configuration below:
spring:
  cloud:
    stream:
      binders:
        kafka2:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <cluster2-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <cluster1-brokers>
      bindings:
        consumer:
          destination: <some-topic>
          binder: kafka1
          group: testconsumergroup
          content-type: application/json
          nativeEncoding: true
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          destination: some-topic
          binder: kafka2
          contentType: application/json
          nativeEncoding: true
          producer:
            valueSerde: JsonSerde
The MessageStreams interface and the EventHub binding configuration are as follows:
public interface MessageStreams {

    String PRODUCER = "producer";
    String CONSUMER = "consumer";

    @Output(PRODUCER)
    MessageChannel producerChannel();

    @Input(CONSUMER)
    SubscribableChannel consumerChannel();
}

@EnableBinding(MessageStreams.class)
public class EventHubStreamsConfiguration {
}
My producer class looks like this:
@Component
@Slf4j
public class EventPublisher {

    private final MessageStreams messageStreams;

    public EventPublisher(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    public boolean publish(CustomMessage event) {
        MessageChannel messageChannel = getChannel();
        MessageBuilder messageBuilder = MessageBuilder.withPayload(event);
        boolean messageSent = messageChannel.send(messageBuilder.build());
        return messageSent;
    }

    protected MessageChannel getChannel() {
        return messageStreams.producerChannel();
    }
}
And the consumer class looks like this:
@Component
@Slf4j
public class EventHandler {

    private final MessageStreams messageStreams;

    public EventHandler(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    @StreamListener(MessageStreams.CONSUMER)
    public void handleEvent(Message<CustomMessage> message) throws Exception {
        // process the event
    }

    @ServiceActivator(inputChannel = "some-topic.testconsumergroup.errors")
    protected void handleError(ErrorMessage errorMessage) throws Exception {
        // handle error
    }
}
I am getting the error below while trying to publish and consume messages from my test.
Dispatcher has no subscribers for channel 'application.producer'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[104], headers={contentType=application/json, timestamp=1593517340422}]
Am I missing anything? For a single cluster, I am able to publish and consume messages; the issue only happens with multiple cluster bindings.

Related

Spring Cloud Kafka Stream: Publishing to DLQ is failing with Avro

I'm unable to publish to the DLQ topic while using ErrorHandlingDeserializer to handle errors in combination with Avro. Below is the error while publishing:
Topic TOPIC_DLT not present in metadata after 60000 ms.
ERROR KafkaConsumerDestination{consumerDestinationName='TOPIC', partitions=6, dlqName='TOPIC_DLT'}.container-0-C-1 o.s.i.h.LoggingHandler:250 - org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1#49abe531]; nested exception is java.lang.RuntimeException: failed, failedMessage=GenericMessage
And here is the application.yml
spring:
  cloud:
    stream:
      bindings:
        process-in-0:
          destination: TOPIC
          group: groupID
      kafka:
        binder:
          brokers:
            - xxx:9092
          configuration:
            security.protocol: SASL_SSL
            sasl.mechanism: PLAIN
          jaas:
            loginModule: org.apache.kafka.common.security.plain.PlainLoginModule
            options:
              username: username
              password: pwd
          consumer-properties:
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
          producer-properties:
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
        bindings:
          process-in-0:
            consumer:
              configuration:
                basic.auth.credentials.source: USER_INFO
                schema.registry.url: registryUrl
                schema.registry.basic.auth.user.info: user:pwd
                security.protocol: SASL_SSL
                sasl.mechanism: PLAIN
              max-attempts: 1
              dlqProducerProperties:
                configuration:
                  basic.auth.credentials.source: USER_INFO
                  schema.registry.url: registryUrl
                  schema.registry.basic.auth.user.info: user:pwd
                  key.serializer: org.apache.kafka.common.serialization.StringSerializer
                  value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              deserializationExceptionHandler: sendToDlq
              ackEachRecord: true
              enableDlq: true
              dlqName: TOPIC_DLT
              autoCommitOnError: true
              autoCommitOffset: true
I'm using the following dependencies:
spring-cloud-dependencies - 2021.0.1
spring-boot-starter-parent - 2.6.3
spring-cloud-stream-binder-kafka
kafka-schema-registry-client - 5.3.0
kafka-avro-serializer - 5.3.0
I'm not sure what exactly I'm missing.
After going through a lot of documentation, I found out that for Spring to post to the DLQ, the original topic and the DLT topic need to have the same number of partitions. If that can't be done, then we need to set dlqPartitions to 1 or provide the DlqPartitionFunction bean manually. By providing dlqPartitions: 1, all the messages will go to partition 0.
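To illustrate the second option, here is a minimal sketch of such a bean, assuming the DlqPartitionFunction interface from the Kafka binder (the binder passes in the consumer group, the failed record and the exception; this variant simply routes every failed record to partition 0 of the DLT):
@Bean
public DlqPartitionFunction dlqPartitionFunction() {
    // send every failed record to partition 0 of the DLT
    return (group, record, exception) -> 0;
}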

Spring Cloud Stream connect to multiple hosts for single binder (RabbitMQ)

We are using Spring Cloud Stream to listen to multiple RabbitMQ queues, specifically with the Spring Cloud Function (SCF) model. (The spring-cloud-stream-reactive module is deprecated in favor of native support via the Spring Cloud Function programming model.)
While there was a single node/host it worked fine (application.yml snippet shared below), but the moment we try to connect to multiple nodes it fails.
Can someone guide us on how to connect to them, or point to a related sample in the Spring Cloud documentation?
The following configuration works as expected:
spring:
  cloud:
    stream:
      function:
        definition: function1;function2;function3
      bindings:
        function1-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit
        function2-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit
        function3-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit
      rabbit:
        bindings:
          function1-in-0:
            consumer:
              bindingRoutingKey: routing.key.1
          function2-in-0:
            consumer:
              bindingRoutingKey: routing.key.2
          function3-in-0:
            consumer:
              bindingRoutingKey: routing.key.3
        binder:
          nodes: address1
Basically it needs to be something like the following:
spring:
  cloud:
    stream:
      function:
        definition: function1;function2;function3
      bindings:
        function1-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit1
        function2-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit2
        function3-in-0:
          group: allocation
          destination: destinationExchange
          binder: rabbit3
      binder:
        rabbit1:
          function1-in-0:
            consumer:
              bindingRoutingKey: routing.key.1
          binder:
            nodes: address1
        rabbit2:
          function2-in-0:
            consumer:
              bindingRoutingKey: routing.key.2
          binder:
            nodes: address2
        rabbit3:
          function3-in-0:
            consumer:
              bindingRoutingKey: routing.key.3
          binder:
            nodes: address3
With just the following addition:
binders:
  rabbit1:
    type: rabbit
    environment:
      spring.spring.cloud.stream.kafka:
        binder:
          nodes: localhost
I am getting this error:
o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is java.lang.IllegalStateException: Unknown binder configuration: rabbit
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.8.jar:5.3.8]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.8.jar:5.3.8]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-2.5.2.jar:2.5.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332) ~[spring-boot-2.5.2.jar:2.5.2]
at com.gap.pem.Application.main(Application.java:14) ~[main/:na]
Caused by: java.lang.IllegalStateException: Unknown binder configuration: rabbit
at org.springframework.util.Assert.state(Assert.java:76) ~[spring-core-5.3.8.jar:5.3.8]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinderInstance(DefaultBinderFactory.java:255) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.doGetBinder(DefaultBinderFactory.java:224) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinder(DefaultBinderFactory.java:152) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.BindingService.getBinder(BindingService.java:386) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.BindingService.bindConsumer(BindingService.java:103) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.AbstractBindableProxyFactory.createAndBindInputs(AbstractBindableProxyFactory.java:118) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.doStartWithBindable(InputBindingLifecycle.java:58) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at java.base/java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608) ~[na:na]
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.start(AbstractBindingLifecycle.java:57) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.cloud.stream.binding.InputBindingLifecycle.start(InputBindingLifecycle.java:34) ~[spring-cloud-stream-3.1.3.jar:3.1.3]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.8.jar:5.3.8]
... 14 common frames omitted
Process finished with exit code 1
We have the following dependencies available:
implementation 'org.springframework.cloud:spring-cloud-stream'
implementation 'org.springframework.cloud:spring-cloud-stream-binder-kafka'
implementation 'org.springframework.cloud:spring-cloud-stream-binder-rabbit'
I struggled a lot getting this to run, especially with routing keys. In the end my solution was a bit different than what is shown here. In the hope that it will help someone in the future:
spring:
  cloud:
    stream:
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: some.server.com:5672,some.other.server.com:5672
                username: someUserName
                password: someUserPassword
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: some.rabbit2.server.us:5672,someother.rabbit2.server.de:5672
                username: secondRabbitUserName
                password: secondRabbitPassword
      function:
        definition: someReceiver;anotherRec
      rabbit:
        bindings:
          someReceiver-in-0:
            consumer:
              auto-bind-dlq: true
              republishToDlq: true
              bindingRoutingKey: some.routing.key.#
          anotherRec-in-0:
            consumer:
              bindingRoutingKey: finished.#
          firstAction-out-0:
            producer:
              routingKeyExpression: "'switch.'+headers.someValue+'.'+headers.someOtherValue"
              bindingRoutingKey: switch.#
          userNotification-out-0:
            producer:
              routingKeyExpression: "'switch.someKeyExpression'"
              bindingRoutingKey: switch.#
      bindings:
        anotherRec-in-0:
          destination: reciving.exchange.name
          group: some-queue-name-v7
          binder: rabbit1
        firstAction-out-0:
          destination: some.exhange.name.1
          binder: rabbit1
        someReceiver-in-0:
          destination: another.exchange.name.1
          group: queueName
          binder: rabbit2
        userNotification-out-0:
          destination: the.exchange.name
          binder: rabbit2
Adding the binders config for both rabbit1 and rabbit2 resolved the issue.
Below is the sample config I tried, with which I was able to consume messages successfully:
spring:
  cloud:
    stream:
      function:
        definition: processFirstConsumer;processSecondConsumer
      bindings:
        processFirstConsumer-in-0:
          group: allocation
          destination: userMessage1
          binder: rabbit1
        processSecondConsumer-in-0:
          group: allocation
          destination: userMessage2
          binder: rabbit2
      binder:
        rabbit1:
          processFirstConsumer-in-0:
            consumer:
              bindingRoutingKey: routing.key.1
          binder:
            nodes: address1
        rabbit2:
          processSecondConsumer-in-0:
            consumer:
              bindingRoutingKey: routing.key.2
          binder:
            nodes: address2
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: /
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: localhost
                port: 5672
                username: guest
                password: guest
                virtual-host: /
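For completeness, the two names referenced in function.definition above correspond to functional beans along these lines (a minimal sketch; the String payload type and the log output are assumptions):
@Bean
public Consumer<String> processFirstConsumer() {
    // bound to processFirstConsumer-in-0, consuming from userMessage1 via rabbit1
    return message -> System.out.println("rabbit1 received: " + message);
}

@Bean
public Consumer<String> processSecondConsumer() {
    // bound to processSecondConsumer-in-0, consuming from userMessage2 via rabbit2
    return message -> System.out.println("rabbit2 received: " + message);
}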

Spring-cloud kafka stream schema registry

I am trying to transform an Avro input message from an input topic using functional programming (and Spring Cloud Stream), and publish a new message on an output topic.
Here is my transform function:
@Bean
public Function<KStream<String, Data>, KStream<String, Double>> evenNumberSquareProcessor() {
    return kStream -> kStream.transform(() -> new CustomProcessor(STORE_NAME), STORE_NAME);
}
CustomProcessor is a class that implements the Transformer interface.
I have tried the transformation with non-Avro input and it works fine.
My difficulty is how to declare the schema registry in the application.yaml file or in the Spring application.
I have tried a lot of different configurations (it seems difficult to find the right documentation), and each time the application doesn't find the settings for schema.registry.url. I get the following error:
Error creating bean with name 'kafkaStreamsFunctionProcessorInvoker':
Invocation of init method failed; nested exception is
java.lang.IllegalStateException:
org.apache.kafka.common.config.ConfigException: Missing required
configuration "schema.registry.url" which has no default value.
Here is my application.yml file:
spring:
  cloud:
    stream:
      function:
        definition: evenNumberSquareProcessor
      bindings:
        evenNumberSquareProcessor-in-0:
          destination: input
          content-type: application/*+avro
          group: group-1
        evenNumberSquareProcessor-out-0:
          destination: output
      kafka:
        binder:
          brokers: my-cluster-kafka-bootstrap.kafka:9092
          consumer-properties:
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: http://localhost:8081
I have tried this configuration too:
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            brokers: my-cluster-kafka-bootstrap.kafka:9092
            configuration:
              schema.registry.url: http://localhost:8081
              default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          bindings:
            evenNumberSquareProcessor-in-0:
              consumer:
                destination: input
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
            evenNumberSquareProcessor-out-0:
              destination: output
My Spring Boot application is declared this way, with the schema registry client activated:
@EnableSchemaRegistryClient
@SpringBootApplication
public class TransformApplication {

    public static void main(String[] args) {
        SpringApplication.run(TransformApplication.class, args);
    }
}
Thanks for any help you can offer.
Regards,
CG
Configure the schema registry under configuration; then it will be available to all bindings. By the way, the Avro Serde goes under bindings for the specific channel. If you want a default, use the default.value.serde property. Your Serde might be the wrong one too.
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            brokers: localhost:9092
            configuration:
              schema.registry.url: http://localhost:8081
              default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          bindings:
            process-in-0:
              consumer:
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
Don't use @EnableSchemaRegistryClient. Enable the schema registry on the Avro Serde instead. In this example, I am using the Data bean from your definition. Try to follow this example:
@Service
public class CustomSerdes extends Serdes {

    private static final Map<String, String> serdeConfig = Stream.of(
            new AbstractMap.SimpleEntry<>(SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081"))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));

    public static Serde<Data> DataAvro() {
        final Serde<Data> dataAvroSerde = new SpecificAvroSerde<>();
        dataAvroSerde.configure(serdeConfig, false);
        return dataAvroSerde;
    }
}
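As a usage sketch, the configured Serde could then be handed to the state store used by the transformer in the question, so stored Data values are (de)serialized against the schema registry. This is only an assumption about how the store might be built (STORE_NAME comes from the question, and the store still has to be registered with the topology, for example as a StoreBuilder bean or via addStateStore):
@Bean
public StoreBuilder<KeyValueStore<String, Data>> dataStoreBuilder() {
    // key-value store for Data values, using the schema-registry-aware Avro Serde
    return Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore(STORE_NAME),
            Serdes.String(),
            CustomSerdes.DataAvro());
}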

Spring Boot application using the Spring Cloud Stream Kafka Binder + Kafka Streams Binder not working - Producer doesn't send messages

My Spring Boot 2.3.1 app with Spring Cloud Stream Horsham.SR6 was using the Kafka Streams binder. I needed to add a Kafka producer that would be used in another part of the application, so I added the Kafka binder. The problem is the producer is not working, throwing an exception:
19:49:40.082 [scheduling-1] [900cdeb11106e199] ERROR o.s.c.stream.binding.BindingService - Failed to create producer binding; retrying in 30 seconds
org.springframework.cloud.stream.provisioning.ProvisioningException: Provisioning exception; nested exception is java.util.concurrent.TimeoutException
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:332)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:148)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:79)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:222)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:90)
at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:152)
at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleProducerBinding$4(BindingService.java:336)
at org.springframework.cloud.sleuth.instrument.async.TraceRunnable.run(TraceRunnable.java:68)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.util.concurrent.TimeoutException: null
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicAndPartitions(KafkaTopicProvisioner.java:368)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicIfNecessary(KafkaTopicProvisioner.java:342)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:319)
This is my configuration:
spring:
  cloud:
    function:
      definition: myProducer
    stream:
      bindings:
        myKStream-in-0:
          destination: my-kstream-topic
          producer:
            useNativeEncoding: true
        myProducer-out-0:
          destination: producer-topic
          producer:
            useNativeEncoding: true
      kafka:
        binder:
          brokers: ${kafka.brokers:localhost}
          min-partition-count: 3
          replication-factor: 3
          producerProperties:
            enable:
              idempotence: true
            retries: 0x7fffffff
            acks: all
            key:
              serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              subject:
                name:
                  strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
            value:
              serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
              subject:
                name:
                  strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
            schema:
              registry:
                url: ${schema-registry.url:http://localhost:8081}
            request:
              timeout:
                ms: 5000
        streams:
          binder:
            brokers: ${kafka.brokers:localhost}
            configuration:
              application:
                id: ${spring.application.name}
                server: ${POD_IP:localhost}:${local.server.port:8080}
              schema:
                registry:
                  url: ${schema-registry.url}
              key:
                subject:
                  name:
                    strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
              value:
                subject:
                  name:
                    strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
              processing:
                guarantee: exactly_once
              replication:
                factor: 3
              group:
                id: kpi
            deserialization-exception-handler: logandcontinue
            min-partition-count: 3
            replication-factor: 3
            state-store-retry:
              max-attempts: 20
              backoff-period: 1500
What could be the problem here?
UPDATE
I tweaked the config as follows:
spring:
  cloud:
    function:
      definition: myProducer
    stream:
      function:
        definition: myKStream
Now I don't see any exception, but the messages don't reach the topic.
In another application that only uses the Kafka binder it works perfectly:
@Configuration
class KafkaProducerConfiguration {

    @Bean
    fun myProducerProcessor(): EmitterProcessor<Message<XXX>> {
        return EmitterProcessor.create()
    }

    @Bean
    fun myProducer(): Supplier<Flux<Message<XXX>>> {
        return Supplier { myProducerProcessor() }
    }
}
...
@Component
class XXXProducer(@Qualifier("myProducerProcessor") private val myProducerProcessor: EmitterProcessor<Message<XXX>>) {

    fun send(...): Mono<Void> {
        return Mono.defer {
            myProducerProcessor.onNext(message)
            Mono.empty()
        }
    }
}
UPDATE 2
I set logging.level.org.springframework.cloud.stream: debug
In the logs the following trace shows up:
o.s.c.s.binder.DefaultBinderFactory - Creating binder: kstream
However, there isn't anything about Creating binder: kafka.
I was missing the multiple-binder configuration for Kafka and Kafka Streams (https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/3.0.0.RELEASE/reference/html/spring-cloud-stream-binder-kafka.html#_multi_binders_with_kafka_streams_based_binders_and_regular_kafka_binder).
Thus, I had to set: spring.cloud.stream.function.definition=myProducer;myKStream
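In application.yml form, that combined definition looks like this:
spring:
  cloud:
    stream:
      function:
        definition: myProducer;myKStream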

Kafka producer JSON serialization

I'm trying to use Spring Cloud Stream to integrate with Kafka. The message being written is a Java POJO, and while it works as expected (the message is written to the topic and I can read it off with a consumer app), there are some unknown characters being added to the start of the message which cause trouble when trying to use Kafka Connect to sink the messages from the topic.
With the default setup this is the message being pushed to Kafka:
 contentType "text/plain"originalContentType "application/json;charset=UTF-8"{"payload":{"username":"john"},"metadata":{"eventName":"Login","sessionId":"089acf50-00bd-47c9-8e49-dc800c1daf50","username":"john","hasSent":null,"createDate":1511186145471,"version":null}}
If I configure the Kafka producer within the Java app then the message is written to the topic without the leading characters / headers:
@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<String, Object>(configProps);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
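A message is then sent through the template along these lines (a sketch; the session topic name and the event payload are assumptions):
@Autowired
private KafkaTemplate<String, Object> kafkaTemplate;

public void publish(Object event) {
    // the value is serialized by the JsonSerializer configured in the producer factory above
    kafkaTemplate.send("session", event);
}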
Message on Kafka:
{"payload":{"username":"john"},"metadata":{"eventName":"Login","sessionId":"089acf50-00bd-47c9-8e49-dc800c1daf50","username":"john","hasSent":null,"createDate":1511186145471}
Since I'm just setting the key/value serializers, I would have expected to be able to do this within the application.yml properties file rather than through code.
However, when the yml is updated to specify the serializers it doesn't work as I would expect, i.e. it doesn't generate the same message as the producer configured in Java (above):
spring:
  profiles: local
  cloud:
    stream:
      bindings:
        session:
          destination: session
          contentType: application/json
      kafka:
        binder:
          brokers: localhost
          zkNodes: localhost
          defaultZkPort: 2181
          defaultBrokerPort: 9092
        bindings:
          session:
            producer:
              configuration:
                value:
                  serializer: org.springframework.kafka.support.serializer.JsonSerializer
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
Message on Kafka:
"/wILY29udGVudFR5cGUAAAAMInRleHQvcGxhaW4iE29yaWdpbmFsQ29udGVudFR5cGUAAAAgImFwcGxpY2F0aW9uL2pzb247Y2hhcnNldD1VVEYtOCJ7InBheWxvYWQiOnsidXNlcm5hbWUiOiJqb2huIn0sIm1ldGFkYXRhIjp7ImV2ZW50TmFtZSI6IkxvZ2luIiwic2Vzc2lvbklkIjoiNGI3YTBiZGEtOWQwZS00Nzg5LTg3NTQtMTQyNDUwYjczMThlIiwidXNlcm5hbWUiOiJqb2huIiwiaGFzU2VudCI6bnVsbCwiY3JlYXRlRGF0ZSI6MTUxMTE4NjI2NDk4OSwidmVyc2lvbiI6bnVsbH19"
Should it be possible to configure this solely through the application.yml? Are there additional settings that are missing?
Credit to @Gary for the answer above!
For completeness, the configuration that is now working for me is below.
spring:
  profiles: local
  cloud:
    stream:
      bindings:
        session:
          producer:
            useNativeEncoding: true
          destination: session
          contentType: application/json
      kafka:
        binder:
          brokers: localhost
          zkNodes: localhost
          defaultZkPort: 2181
          defaultBrokerPort: 9092
        bindings:
          session:
            producer:
              configuration:
                value:
                  serializer: org.springframework.kafka.support.serializer.JsonSerializer
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
See headerMode and useNativeEncoding in the producer properties (....session.producer.useNativeEncoding).
headerMode
When set to raw, disables header embedding on output. Effective only for messaging middleware that does not support message headers natively and requires header embedding. Useful when producing data for non-Spring Cloud Stream applications.
Default: embeddedHeaders.
useNativeEncoding
When set to true, the outbound message is serialized directly by client library, which must be configured correspondingly (e.g. setting an appropriate Kafka producer value serializer). When this configuration is being used, the outbound message marshalling is not based on the contentType of the binding. When native encoding is used, it is the responsibility of the consumer to use appropriate decoder (ex: Kafka consumer value de-serializer) to deserialize the inbound message. Also, when native encoding/decoding is used the headerMode property is ignored and headers will not be embedded into the message.
Default: false.
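As that note says, with native encoding the consuming side has to configure a matching deserializer itself. A minimal sketch of what that could look like for this JSON payload, assuming a binding also named session on the consuming application and the Kafka binder's per-binding consumer configuration:
spring:
  cloud:
    stream:
      kafka:
        bindings:
          session:
            consumer:
              configuration:
                key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
                value.deserializer: org.springframework.kafka.support.serializer.JsonDeserializer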
Now, the spring.kafka.producer.value-serializer property can be used.
yml:
spring:
  kafka:
    producer:
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
properties:
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
