Spring Cloud Stream Kafka Streams schema registry - Spring Boot

I am trying to use functional programming (with Spring Cloud Stream) to transform an Avro message from an input topic and publish a new message on an output topic.
Here is my transform function:
@Bean
public Function<KStream<String, Data>, KStream<String, Double>> evenNumberSquareProcessor() {
    return kStream -> kStream.transform(() -> new CustomProcessor(STORE_NAME), STORE_NAME);
}
The CustomProcessor is a class that implements the "Transformer" interface.
I have tried the transformation with non-Avro input and it works fine.
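A minimal sketch of such a transformer, for context (the state store usage, the field read from Data and the squaring logic are assumptions for illustration, not the original code):

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

public class CustomProcessor implements Transformer<String, Data, KeyValue<String, Double>> {

    private final String storeName;
    private KeyValueStore<String, Data> store;

    public CustomProcessor(String storeName) {
        this.storeName = storeName;
    }

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        this.store = (KeyValueStore<String, Data>) context.getStateStore(storeName);
    }

    @Override
    public KeyValue<String, Double> transform(String key, Data value) {
        store.put(key, value);                                  // remember the last value per key
        double result = value.getNumber() * value.getNumber();  // getNumber() is an assumed field
        return KeyValue.pair(key, result);
    }

    @Override
    public void close() {
    }
}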
My difficulty is how to declare the schema registry in the application.yaml file or in the Spring application.
I have tried a lot of different configurations (it seems difficult to find the right documentation), and each time the application doesn't find the setting for schema.registry.url. I get the following error:
Error creating bean with name 'kafkaStreamsFunctionProcessorInvoker':
Invocation of init method failed; nested exception is
java.lang.IllegalStateException:
org.apache.kafka.common.config.ConfigException: Missing required
configuration "schema.registry.url" which has no default value.
Here is my application.yml file:
spring:
  cloud:
    stream:
      function:
        definition: evenNumberSquareProcessor
      bindings:
        evenNumberSquareProcessor-in-0:
          destination: input
          content-type: application/*+avro
          group: group-1
        evenNumberSquareProcessor-out-0:
          destination: output
      kafka:
        binder:
          brokers: my-cluster-kafka-bootstrap.kafka:9092
          consumer-properties:
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: http://localhost:8081
I have tried this configuration too:
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            brokers: my-cluster-kafka-bootstrap.kafka:9092
            configuration:
              schema.registry.url: http://localhost:8081
              default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          bindings:
            evenNumberSquareProcessor-in-0:
              consumer:
                destination: input
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
            evenNumberSquareProcessor-out-0:
              destination: output
My Spring Boot application is declared this way, with the schema registry client enabled:
@EnableSchemaRegistryClient
@SpringBootApplication
public class TransformApplication {
    public static void main(String[] args) {
        SpringApplication.run(TransformApplication.class, args);
    }
}
Thanks for any help you could bring to me.
Regards
CG

Configure the schema registry under configuration; then it will be available to all bindings. By the way, the Avro serde goes under bindings, on the specific channel. If you want a default instead, use the default.value.serde property. Your Serde might be the wrong one too.
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            brokers: localhost:9092
            configuration:
              schema.registry.url: http://localhost:8081
              default.value.serde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          bindings:
            process-in-0:
              consumer:
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
Don't use @EnableSchemaRegistryClient. Enable the schema registry on the Avro Serde instead. In this example, I am using the Data bean from your definition. Try to follow this example here:
// Assumes a static import of SCHEMA_REGISTRY_URL_CONFIG from the Confluent serializer
// config class (AbstractKafkaSchemaSerDeConfig, or AbstractKafkaAvroSerDeConfig in older clients).
@Service
public class CustomSerdes extends Serdes {

    private final static Map<String, String> serdeConfig = Stream.of(
            new AbstractMap.SimpleEntry<>(SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081"))
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));

    public static Serde<Data> DataAvro() {
        final Serde<Data> dataAvroSerde = new SpecificAvroSerde<>();
        dataAvroSerde.configure(serdeConfig, false);
        return dataAvroSerde;
    }
}
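One typical way to put this Serde to work with the processor above is through the state store it uses. A minimal sketch, assuming STORE_NAME is the constant referenced in evenNumberSquareProcessor() and that your Kafka Streams binder version picks up StoreBuilder beans automatically (otherwise the store has to be added to the topology by hand):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class StoreConfig {

    static final String STORE_NAME = "data-store"; // placeholder, reuse your existing constant

    // Builds the state store used by the transformer, with String keys and
    // Avro-serialized Data values via the Serde configured above.
    @Bean
    public StoreBuilder<KeyValueStore<String, Data>> dataStore() {
        return Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore(STORE_NAME),
                Serdes.String(),
                CustomSerdes.DataAvro());
    }
}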

Related

Spring Cloud Stream [2021.0.5] Kafka Batch mode Avro native encoding doesn't work with spring cloud sleuth

I'm working on upgrading Spring Boot to 2.7.8 and Spring Cloud to 2021.0.5.
I have a Spring Cloud Stream Kafka consumer using Avro deserialization in batch mode, and I was trying to use useNativeEncoding according to the documentation.
The problem is that when using an input of Message<List>, the Spring Cloud Stream code (when using Sleuth) overrides the native encoding flag to false in the class SimpleFunctionRegistry, so the message payload is empty.
Without the Message<> wrapper it works fine, i.e. just a List.
After spending more than a day trying to debug the problem without understanding why, I reproduced it in a side project, and it stopped working after adding Sleuth.
The Bug
The problem is in the class SimpleFunctionRegistry, in the method private FunctionInvocationWrapper wrapInAroundAdviceIfNecessary(FunctionInvocationWrapper function): it calls apply and overrides the flag.
Spring Cloud Stream team, is there any workaround? Or an easy fix?
application.yaml example
spring:
  cloud:
    stream:
      binders:
        kafka-string-avro-native:
          type: kafka
          defaultCandidate: true
          environment.spring.cloud.stream.kafka.binder.consumerProperties:
            dlqProducerProperties.configuration.key.serializer: org.apache.kafka.common.serialization.StringSerializer
            dlqProducerProperties.configuration.value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: ${SCHEMA_REGISTRY_URL:http://0.0.0.0:55013}
            specific.avro.reader: true
            useNativeDecoding: true
      bindings:
        revenueEventConsumer-in-0:
          binder: kafka-string-avro-native
          destination: email.campaign_revenue_events
          group: test-4
          consumer:
            concurrency: 1
            batch-mode: true
            use-native-decoding: true
      function:
        definition: revenueEventConsumer
      kafka:
        binder:
          brokers: 0.0.0.0:55008
I found a workaround for the issue by overriding the TraceFunctionAroundWrapper bean and overriding setSkipInputConversion(true).
See the code below:
@Bean
@Primary
TraceFunctionAroundWrapper customTraceFunctionAroundWrapper(Environment environment, Tracer tracer, Propagator propagator,
        Propagator.Setter<MessageHeaderAccessor> injector, Propagator.Getter<MessageHeaderAccessor> extractor,
        ObjectProvider<List<FunctionMessageSpanCustomizer>> customizers) {
    return new CustomTraceFunctionAroundWrapper(environment, tracer, propagator, injector, extractor,
            customizers.getIfAvailable(ArrayList::new));
}

public class CustomTraceFunctionAroundWrapper extends TraceFunctionAroundWrapper {

    public CustomTraceFunctionAroundWrapper(Environment environment, Tracer tracer,
            Propagator propagator,
            Propagator.Setter<MessageHeaderAccessor> injector,
            Propagator.Getter<MessageHeaderAccessor> extractor) {
        super(environment, tracer, propagator, injector, extractor);
    }

    public CustomTraceFunctionAroundWrapper(Environment environment, Tracer tracer, Propagator propagator,
            Propagator.Setter<MessageHeaderAccessor> injector,
            Propagator.Getter<MessageHeaderAccessor> extractor,
            List<FunctionMessageSpanCustomizer> customizers) {
        super(environment, tracer, propagator, injector, extractor, customizers);
    }

    @Override
    protected Object doApply(Object message, SimpleFunctionRegistry.FunctionInvocationWrapper targetFunction) {
        // Skip input conversion so the native decoding flag is not overridden
        targetFunction.setSkipInputConversion(true);
        return super.doApply(message, targetFunction);
    }
}
This is only a workaround until the bug is fixed in Spring Cloud Stream and Sleuth.

Exception in thread "WordListenerService-process-applicationId-3e5d92bf-f503-4488-b367-d18deb1940c8-StreamThread-1" java.lang.UnsatisfiedLinkError:

I'm working on Spring Boot and Apache Kafka using Spring Cloud Stream. While running the code, I'm facing the error below.
Exception in thread "WordListenerService-process-applicationId-3e5d92bf-f503-4488-b367-d18deb1940c8-StreamThread-1" java.lang.UnsatisfiedLinkError: /private/var/folders/g6/9m624n45627541g0xj6kssmw0000gn/T/librocksdbjni6807493407431942629.jnilib: dlopen(/private/var/folders/g6/9m624n45627541g0xj6kssmw0000gn/T/librocksdbjni6807493407431942629.jnilib, 0x0001): tried: '/private/var/folders/g6/9m624n45627541g0xj6kssmw0000gn/T/librocksdbjni6807493407431942629.jnilib' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/lib/librocksdbjni6807493407431942629.jnilib' (no such file)
at java.base/java.lang.ClassLoader$NativeLibrary.load0(Native Method)
at java.base/java.lang.ClassLoader$NativeLibrary.load(ClassLoader.java:2452)
at java.base/java.lang.ClassLoader$NativeLibrary.loadLibrary(ClassLoader.java:2508)
at java.base/java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2704)
at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2637)
at java.base/java.lang.Runtime.load0(Runtime.java:745)
at java.base/java.lang.System.load(System.java:1873)
at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
at org.rocksdb.RocksDB.<clinit>(RocksDB.java:35)
at org.rocksdb.DBOptions.<clinit>(DBOptions.java:21)
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:130)
at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:224)
at org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:42)
at org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:61)
at org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.lambda$init$0(MeteredKeyValueStore.java:101)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:836)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:101)
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.registerStateStores(ProcessorStateManager.java:199)
at org.apache.kafka.streams.processor.internals.StateManagerUtil.registerStateStores(StateManagerUtil.java:76)
at org.apache.kafka.streams.processor.internals.StreamTask.initializeIfNeeded(StreamTask.java:211)
at org.apache.kafka.streams.processor.internals.TaskManager.tryToCompleteRestoration(TaskManager.java:426)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:660)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
9:39:59 PM: Execution finished ':StreamingAggregatesApplication.main()'.
WordListeningBinders.java
public interface WordListenerBinding {

    @Input("words-input-channel")
    KStream<String, String> wordsInputStream();
}
WordListenerService.java
@Service
@Log4j2
@EnableBinding(WordListenerBinding.class)
public class WordListenerService {

    @StreamListener("words-input-channel")
    public void process(KStream<String, String> input) {
        KStream<String, String> wordStream = input
                .flatMapValues(value -> Arrays.asList(value.toLowerCase().split(" ")));
        wordStream.groupBy((key, value) -> value)
                .count()
                .toStream()
                .peek((k, v) -> log.info("Word: {} Count: {}", k, v));
    }
}
application.yml
spring:
  cloud:
    stream:
      bindings:
        words-input-channel:
          destination: streaming-words-topic
      kafka:
        streams:
          binder:
            brokers: localhost:9092
            configuration:
              commit.interval.ms: 10000
              state.dir: state-store
              default:
                key:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
                value:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
Judging from the stack trace, this error has nothing to do with Spring or Spring Cloud Stream. It also seems pretty common, and a basic search turns up many answers (e.g., an issue with the version of the RocksDB/Java libraries, etc.). Here is just one example of such an answer - KAFKA STREAM: UnsatisfiedLinkError on Lib Rocks DB
Additionally, I see that you are using @StreamListener. The annotation-based programming model has been deprecated for several years and is being removed in the next version of Spring Cloud Stream, so please migrate to the functional programming model, for example:
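A rough functional equivalent of the listener above (a sketch: with this, spring.cloud.function.definition: process replaces @EnableBinding/@StreamListener, and the binding name becomes process-in-0, pointing at the streaming-words-topic destination):

import java.util.Arrays;
import java.util.function.Consumer;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WordListenerConfig {

    // Functional equivalent of the @StreamListener method above: split each
    // message into lower-case words, count per word and log the running counts.
    @Bean
    public Consumer<KStream<String, String>> process() {
        return input -> input
                .flatMapValues(value -> Arrays.asList(value.toLowerCase().split(" ")))
                .groupBy((key, value) -> value)
                .count()
                .toStream()
                .peek((word, count) -> System.out.printf("Word: %s Count: %d%n", word, count));
    }
}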

Spring Cloud Stream Kafka Multiple Binding

I am using Spring Cloud Stream Kafka binder to consume messages from Kafka. I am able to make my sample work with a single Kafka Binder as below
spring:
  cloud:
    stream:
      kafka:
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <broker url>
      bindings:
        consumer:
          destination: some-topic
          group: testconsumergroup
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          destination: some-other-topic
          producer:
            valueSerde: JsonSerde
Note that both the bindings are to the same Kafka Broker here. However, I have a situation where I need to publish to a topic in some Kafka Cluster and also consume from another topic in a different Kafka Cluster. How should I change my configuration to be able to bind to different Kafka Clusters?
I tried something like this
spring:
  cloud:
    stream:
      binders:
        defaultbinder:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster1-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster2-brokers>
      bindings:
        consumer:
          binder: kafka1
          destination: some-topic
          group: testconsumergroup
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          binder: defaultbinder
          destination: some-topic
          producer:
            valueSerde: JsonSerde
      kafka:
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <cluster1-brokers>
and
spring:
  cloud:
    stream:
      binders:
        defaultbinder:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster1-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.streams.binder.brokers: <cluster2-brokers>
      kafka:
        bindings:
          consumer:
            binder: kafka1
            destination: some-topic
            group: testconsumergroup
            consumer:
              concurrency: 1
              valueSerde: JsonSerde
          producer:
            binder: defaultbinder
            destination: some-topic
            producer:
              valueSerde: JsonSerde
      kafka:
        binder:
          consumer-properties: {enable.auto.commit: true}
          auto-create-topics: false
          brokers: <cluster1-brokers>
But neither of them seems to work.
The first configuration seems to be invalid.
For the second configuration I get the error below:
Caused by: java.lang.IllegalStateException: A default binder has been requested, but there is more than one binder available for 'org.springframework.cloud.stream.messaging.DirectWithAttributesChannel' : kafka1,defaultbinder, and no default binder has been set.
I am using the dependency 'org.springframework.cloud:spring-cloud-starter-stream-kafka:3.0.1.RELEASE' and Spring Boot 2.2.6
Please let me know how to configure multiple bindings for Kafka using Spring Cloud Stream
Update
Tried this configuration below
spring:
  cloud:
    stream:
      binders:
        kafka2:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <cluster2-brokers>
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder.brokers: <cluster1-brokers>
      bindings:
        consumer:
          destination: <some-topic>
          binder: kafka1
          group: testconsumergroup
          content-type: application/json
          nativeEncoding: true
          consumer:
            concurrency: 1
            valueSerde: JsonSerde
        producer:
          destination: some-topic
          binder: kafka2
          contentType: application/json
          nativeEncoding: true
          producer:
            valueSerde: JsonSerde
The MessageStreams interface and the EventHub binding are as follows:
public interface MessageStreams {

    String PRODUCER = "producer";
    String CONSUMER = "consumer";

    @Output(PRODUCER)
    MessageChannel producerChannel();

    @Input(CONSUMER)
    SubscribableChannel consumerChannel();
}

@EnableBinding(MessageStreams.class)
public class EventHubStreamsConfiguration {
}
My Producer class looks like below
@Component
@Slf4j
public class EventPublisher {

    private final MessageStreams messageStreams;

    public EventPublisher(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    public boolean publish(CustomMessage event) {
        MessageChannel messageChannel = getChannel();
        MessageBuilder messageBuilder = MessageBuilder.withPayload(event);
        boolean messageSent = messageChannel.send(messageBuilder.build());
        return messageSent;
    }

    protected MessageChannel getChannel() {
        return messageStreams.producerChannel();
    }
}
And Consumer class looks like below
@Component
@Slf4j
public class EventHandler {

    private final MessageStreams messageStreams;

    public EventHandler(MessageStreams messageStreams) {
        this.messageStreams = messageStreams;
    }

    @StreamListener(MessageStreams.CONSUMER)
    public void handleEvent(Message<CustomMessage> message) throws Exception {
        // process the event
    }

    @Override
    @ServiceActivator(inputChannel = "some-topic.testconsumergroup.errors")
    protected void handleError(ErrorMessage errorMessage) throws Exception {
        // handle error;
    }
}
I am getting the below error while trying to publish and consume the messages from my test.
Dispatcher has no subscribers for channel 'application.producer'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[104], headers={contentType=application/json, timestamp=1593517340422}]
Am I missing anything? For a single cluster, I am able to publish and consume messages. The issue only happens with multiple cluster bindings.

How to Shutdown/Stop RabbitMQ queue of Spring Cloud stream bindings

I want to stop the RabbitMQ consumers of a queue created from Spring Cloud Stream bindings when the endpoint /prepare-for-shutdown is hit. Please find the configuration below.
Dependency added in pom.xml:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>
Application.yml:
spring:
  cloud:
    stream:
      bindings:
        produceChannel:
          binder: rabbit
          content-type: application/json
          destination: internal-exchange
        consumeChannel:
          binder: rabbit
          content-type: application/json
          destination: internal-exchange
          group: small-queue
      rabbit:
        bindings:
          consumeChannel:
            consumer:
              autoBindDlq: true
              durableSubscription: true
              requeueRejected: false
              republishToDlq: true
              bindingRoutingKey: admin
          produceChannel:
            producer:
              routingKeyExpression: '"admin"'
Sample.java
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.SubscribableChannel;

public interface Sample {

    @Input("consumeChannel")
    SubscribableChannel consumeChannel();

    @Output("produceChannel")
    SubscribableChannel produceChannel();
}
The integration with RabbitMQ has been achieved using Spring Cloud's @StreamListener and @EnableBinding abstractions, as shown below:
@EnableBinding(Sample.class)
@StreamListener("consumeChannel")
public void sampleMessage(String message) {
    // code
}
I'm looking to stop a consumer of a RabbitMQ queue programmatically.
Thanks in advance.
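One common way to do this programmatically (not from the original post; a sketch assuming spring-boot-starter-actuator is on the classpath and that your Spring Cloud Stream version exposes the BindingsEndpoint referenced in the analysis below, with a changeState operation) is to change the binding's state from the shutdown endpoint:

import org.springframework.cloud.stream.endpoint.BindingsEndpoint;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ShutdownController {

    private final BindingsEndpoint bindingsEndpoint;

    public ShutdownController(BindingsEndpoint bindingsEndpoint) {
        this.bindingsEndpoint = bindingsEndpoint;
    }

    // Stops the consumer behind the consumeChannel binding; "consumeChannel"
    // is the @Input channel name from the Sample interface above.
    @PostMapping("/prepare-for-shutdown")
    public void prepareForShutdown() {
        bindingsEndpoint.changeState("consumeChannel", BindingsEndpoint.State.STOPPED);
    }
}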
I analyzed why I am getting empty values when invoking the actuator endpoint '/actuator/bindings'.
When hitting the actuator bindings endpoint, it invokes the method gatherInputBindings() in BindingsEndpoint.class.
In BindingsEndpoint.java, the binding values are fetched from inputBindingLifecycle:
(Collection<Binding<?>>) new DirectFieldAccessor(inputBindingLifecycle).getPropertyValue("inputBindings");
In the methods below, an empty bindings list is set on inputBindings.
In InputBindingLifecycle.java,
void doStartWithBindable(Bindable bindable) {
    this.inputBindings = bindable.createAndBindInputs(bindingService);
}
In Bindable.java,
default Collection<Binding<Object>> createAndBindInputs(BindingService adapter) {
    return Collections.<Binding<Object>>emptyList();
}
Please suggest how to fix this: do I need to change any dependency, or any code or configuration?

Kafka producer JSON serialization

I'm trying to use Spring Cloud Stream to integrate with Kafka. The message being written is a Java POJO, and while it works as expected (the message is written to the topic and I can read it off with a consumer app), there are some unknown characters being added to the start of the message which are causing trouble when trying to integrate Kafka Connect to sink the messages from the topic.
With the default setup this is the message being pushed to Kafka:
 contentType "text/plain"originalContentType "application/json;charset=UTF-8"{"payload":{"username":"john"},"metadata":{"eventName":"Login","sessionId":"089acf50-00bd-47c9-8e49-dc800c1daf50","username":"john","hasSent":null,"createDate":1511186145471,"version":null}}
If I configure the Kafka producer within the Java app then the message is written to the topic without the leading characters / headers:
@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, Object> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9092");
        configProps.put(
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class);
        configProps.put(
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                JsonSerializer.class);
        return new DefaultKafkaProducerFactory<String, Object>(configProps);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
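For reference, a producer configured this way would typically be used through the KafkaTemplate bean, roughly like this (a sketch; the sending component and method names are made up, and the "session" topic matches the destination in the binding configuration below):

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class SessionEventSender {

    private final KafkaTemplate<String, Object> kafkaTemplate;

    public SessionEventSender(KafkaTemplate<String, Object> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Sends the POJO to the "session" topic; the JsonSerializer writes plain
    // JSON without the embedded Spring Cloud Stream headers shown earlier.
    public void send(Object event) {
        kafkaTemplate.send("session", event);
    }
}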
Message on Kafka:
{"payload":{"username":"john"},"metadata":{"eventName":"Login","sessionId":"089acf50-00bd-47c9-8e49-dc800c1daf50","username":"john","hasSent":null,"createDate":1511186145471}
Since I'm just setting the key/value serializers I would've expected to be able to do this within the application.yml properties file, rather than doing it through the code.
However, when the yml is updated to specify the serializers it's not working as I would expect i.e. it's not generating the same message as the producer configured in Java (above):
spring:
  profiles: local
  cloud:
    stream:
      bindings:
        session:
          destination: session
          contentType: application/json
      kafka:
        binder:
          brokers: localhost
          zkNodes: localhost
          defaultZkPort: 2181
          defaultBrokerPort: 9092
        bindings:
          session:
            producer:
              configuration:
                value:
                  serializer: org.springframework.kafka.support.serializer.JsonSerializer
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
Message on Kafka:
"/wILY29udGVudFR5cGUAAAAMInRleHQvcGxhaW4iE29yaWdpbmFsQ29udGVudFR5cGUAAAAgImFwcGxpY2F0aW9uL2pzb247Y2hhcnNldD1VVEYtOCJ7InBheWxvYWQiOnsidXNlcm5hbWUiOiJqb2huIn0sIm1ldGFkYXRhIjp7ImV2ZW50TmFtZSI6IkxvZ2luIiwic2Vzc2lvbklkIjoiNGI3YTBiZGEtOWQwZS00Nzg5LTg3NTQtMTQyNDUwYjczMThlIiwidXNlcm5hbWUiOiJqb2huIiwiaGFzU2VudCI6bnVsbCwiY3JlYXRlRGF0ZSI6MTUxMTE4NjI2NDk4OSwidmVyc2lvbiI6bnVsbH19"
Should it be possible to configure this solely through the application yml? Are there additional settings that are missing?
Credit to @Gary for the answer above!
For completeness, the configuration which is now working for me is below.
spring:
  profiles: local
  cloud:
    stream:
      bindings:
        session:
          producer:
            useNativeEncoding: true
          destination: session
          contentType: application/json
      kafka:
        binder:
          brokers: localhost
          zkNodes: localhost
          defaultZkPort: 2181
          defaultBrokerPort: 9092
        bindings:
          session:
            producer:
              configuration:
                value:
                  serializer: org.springframework.kafka.support.serializer.JsonSerializer
                key:
                  serializer: org.apache.kafka.common.serialization.StringSerializer
See headerMode and useNativeEncoding in the producer properties (....session.producer.useNativeEncoding).
headerMode
When set to raw, disables header embedding on output. Effective only for messaging middleware that does not support message headers natively and requires header embedding. Useful when producing data for non-Spring Cloud Stream applications.
Default: embeddedHeaders.
useNativeEncoding
When set to true, the outbound message is serialized directly by client library, which must be configured correspondingly (e.g. setting an appropriate Kafka producer value serializer). When this configuration is being used, the outbound message marshalling is not based on the contentType of the binding. When native encoding is used, it is the responsibility of the consumer to use appropriate decoder (ex: Kafka consumer value de-serializer) to deserialize the inbound message. Also, when native encoding/decoding is used the headerMode property is ignored and headers will not be embedded into the message.
Default: false.
Now, the spring.kafka.producer.value-serializer property can be used.
yml:
spring:
  kafka:
    producer:
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
properties:
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
