Connecting to multiple messaging systems in a Spring Cloud Stream application using a configuration class - spring-boot

I have a requirement to read all the spring.cloud.stream.kafka.binder properties from a cloud configuration service. I was able to move the bindings into a separate configuration class by defining a BindingServiceProperties bean, and I was also able to move one Kafka system's properties into a configuration class by defining a KafkaBinderConfigurationProperties bean. However, if both Kafka servers' properties are moved in as binder configuration, the application is unable to identify the binders and their respective bindings. I need help defining multiple binders in a configuration class and attaching each binder to its respective bindings.
application.yml
spring:
  cloud:
    stream:
      defaultBinder: kafka1
      binders:
        kafka1:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: localhost:9095
              autoCreateTopics: true
              configuration:
                auto.offset.reset: latest
        kafka2:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: localhost:9096
              autoCreateTopics: true
              configuration:
                auto.offset.reset: latest
Using the application.yml configuration I'm able to attach the binder to respective bindings.
BinderConfiguration
@Primary
@Bean
public BindingServiceProperties bindingServiceProperties() {
    BindingServiceProperties bindingServiceProperties = new BindingServiceProperties();
    BindingProperties input1BindingProps = getInput1BindingProperties();
    BindingProperties input2BindingProps = getInput2BindingProperties();
    Map<String, BindingProperties> bindingProperties = new HashMap<>();
    bindingProperties.put(StreamBindings.INPUT_1, input1BindingProps);
    bindingProperties.put(StreamBindings.INPUT_2, input2BindingProps);
    bindingServiceProperties.setBindings(bindingProperties);
    return bindingServiceProperties;
}

private BindingProperties getInput1BindingProperties() {
    ConsumerProperties consumerProperties = new ConsumerProperties();
    consumerProperties.setMaxAttempts(1);
    consumerProperties.setDefaultRetryable(false);
    BindingProperties props = new BindingProperties();
    props.setDestination("test1");
    props.setContentType(MediaType.APPLICATION_JSON_VALUE);
    props.setGroup("test-group");
    props.setConsumer(consumerProperties);
    props.setBinder("kafka1");
    return props;
}

private BindingProperties getInput2BindingProperties() {
    ConsumerProperties consumerProperties = new ConsumerProperties();
    consumerProperties.setMaxAttempts(1);
    consumerProperties.setDefaultRetryable(false);
    BindingProperties props = new BindingProperties();
    props.setDestination("test2");
    props.setContentType(MediaType.APPLICATION_JSON_VALUE);
    props.setGroup("test-group");
    props.setConsumer(consumerProperties);
    props.setBinder("kafka2");
    return props;
}
KafkaBinderConfigurationProperties
@Bean
public KafkaBinderConfigurationProperties getKafka1BinderProps() {
    KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties
            = new KafkaBinderConfigurationProperties(new KafkaProperties());
    String brokers = "localhost:9095";
    kafkaBinderConfigurationProperties.setBrokers(brokers);
    kafkaBinderConfigurationProperties.setDefaultBrokerPort("9095");
    Map<String, String> configuration = new HashMap<>();
    configuration.put("auto.offset.reset", "latest");
    kafkaBinderConfigurationProperties.setConfiguration(configuration);
    return kafkaBinderConfigurationProperties;
}

@Bean
public KafkaBinderConfigurationProperties getKafka2BinderProps() {
    KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties
            = new KafkaBinderConfigurationProperties(new KafkaProperties());
    String brokers = "localhost:9096";
    kafkaBinderConfigurationProperties.setBrokers(brokers);
    kafkaBinderConfigurationProperties.setDefaultBrokerPort("9096");
    Map<String, String> configuration = new HashMap<>();
    configuration.put("auto.offset.reset", "latest");
    kafkaBinderConfigurationProperties.setConfiguration(configuration);
    return kafkaBinderConfigurationProperties;
}
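One direction that might mirror the YAML binder declarations programmatically is to populate the binders map on the same BindingServiceProperties bean instead of defining two KafkaBinderConfigurationProperties beans. The following is only a hypothetical, untested sketch: it assumes org.springframework.cloud.stream.config.BinderProperties exposes setType() and a mutable environment map, and that flattened environment keys are honored the same way as the nested YAML form.
private Map<String, BinderProperties> binderProperties() {
    // Hypothetical sketch: mirrors the YAML "binders" block in code (untested).
    Map<String, BinderProperties> binders = new HashMap<>();

    BinderProperties kafka1 = new BinderProperties();
    kafka1.setType("kafka");
    kafka1.getEnvironment().put("spring.cloud.stream.kafka.binder.brokers", "localhost:9095");
    kafka1.getEnvironment().put("spring.cloud.stream.kafka.binder.autoCreateTopics", "true");
    kafka1.getEnvironment().put("spring.cloud.stream.kafka.binder.configuration.auto.offset.reset", "latest");
    binders.put("kafka1", kafka1);

    BinderProperties kafka2 = new BinderProperties();
    kafka2.setType("kafka");
    kafka2.getEnvironment().put("spring.cloud.stream.kafka.binder.brokers", "localhost:9096");
    kafka2.getEnvironment().put("spring.cloud.stream.kafka.binder.autoCreateTopics", "true");
    kafka2.getEnvironment().put("spring.cloud.stream.kafka.binder.configuration.auto.offset.reset", "latest");
    binders.put("kafka2", kafka2);

    return binders;
}
In the bindingServiceProperties() bean above, one would then call bindingServiceProperties.setBinders(binderProperties()) and bindingServiceProperties.setDefaultBinder("kafka1") before returning the bean; whether this resolves the binder-detection problem described in the question has not been verified.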
The application works fine with the properties defined in application.yml, i.e. without moving the binder configuration into a separate configuration class.
Complete code can be found in this repository.
Any help is appreciated.

Related

Unable to disable topic auto-create in Spring Kafka v1.1.6

I'm using Spring Boot v1.5 and Spring Kafka v1.1.6 to publish a message to a Kafka broker.
When it publishes the message to the topic, the topic is created on the broker by default if it is not present.
I do not want it to create topics that are not present. I tried to disable this by adding the property spring.kafka.topic.properties.auto.create=false, but it does not work.
Below is my bean configuration:
@Value("${kpi.kafka.bootstrap-servers}")
private String bootstrapServer;

@Bean
public ProducerFactory<String, CmsMonitoringMetrics> producerFactoryJson() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    configProps.put("allow.auto.create.topics", "false");
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public KafkaTemplate<String, CmsMonitoringMetrics> kafkaTemplateJson() {
    return new KafkaTemplate<>(producerFactoryJson());
}
In the producer method I'm using the below code to publish:
Message<CmsMonitoringMetrics> message = MessageBuilder.withPayload(data)
        .setHeader(KafkaHeaders.TOPIC, topicName)
        .build();
SendResult<String, CmsMonitoringMetrics> result = kafkaTemplate.send(message).get();
It still creates the topic. Please help me disable it.
As per the documentation, auto.create.topics.enable is a broker configuration. That means you have to set this property on the Kafka broker (server) side, not on the producer/consumer clients.
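For illustration, the property would go into each broker's server.properties (the property name is standard Kafka broker configuration; the exact file location depends on your installation, and the broker needs a restart afterwards):
# server.properties on each Kafka broker
auto.create.topics.enable=false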

Meter registration fails on Spring Boot Kafka consumer with Prometheus MeterRegistry

I am investigating a bug report in our application (Spring Boot) regarding the Kafka metric kafka.consumer.fetch.manager.records.consumed.total being missing.
The application has two Kafka consumers, let's call them the query-routing and query-tracking consumers. They are configured via the @KafkaListener annotation, and each consumer has its own instance of ConcurrentKafkaListenerContainerFactory.
The query-routing consumer is configured as:
@Configuration
@EnableKafka
public class QueryRoutingConfiguration {

    @Bean(name = "queryRoutingContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, RoutingInfo> kafkaListenerContainerFactory(MeterRegistry meterRegistry) {
        Map<String, Object> consumerConfigs = new HashMap<>();
        // For brevity I removed the configs as they are trivial configs like bootstrap servers and serializers
        DefaultKafkaConsumerFactory<String, RoutingInfo> consumerFactory =
                new DefaultKafkaConsumerFactory<>(consumerConfigs);
        consumerFactory.addListener(new MicrometerConsumerListener<>(meterRegistry));
        ConcurrentKafkaListenerContainerFactory<String, RoutingInfo> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.getContainerProperties().setIdleEventInterval(5000L);
        return factory;
    }
}
And the query-tracking consumer is configured as:
@Configuration
@EnableKafka
public class QueryTrackingConfiguration {

    private static final FixedBackOff NO_ATTEMPTS = new FixedBackOff(Duration.ofSeconds(0).toMillis(), 0L);

    @Bean(name = "queryTrackingContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, QueryTrackingMessage> kafkaListenerContainerFactory(MeterRegistry meterRegistry) {
        Map<String, Object> consumerConfigs = new HashMap<>();
        // For brevity I removed the configs as they are trivial configs like bootstrap servers and serializers
        DefaultKafkaConsumerFactory<String, QueryTrackingMessage> consumerFactory =
                new DefaultKafkaConsumerFactory<>(consumerConfigs);
        consumerFactory.addListener(new MicrometerConsumerListener<>(meterRegistry));
        ConcurrentKafkaListenerContainerFactory<String, QueryTrackingMessage> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        factory.setBatchListener(true);
        DefaultErrorHandler deusErrorHandler = new DefaultErrorHandler(NO_ATTEMPTS);
        factory.setCommonErrorHandler(deusErrorHandler);
        return factory;
    }
}
The MeterRegistryConfigurator bean configuration is set as:
@Configuration
public class MeterRegistryConfigurator {

    private static final Logger LOG = LoggerFactory.getLogger(MeterRegistryConfigurator.class);
    private static final String PREFIX = "dps";

    @Bean
    MeterRegistryCustomizer<MeterRegistry> meterRegistryCustomizer() {
        return registry -> registry.config()
            .onMeterAdded(meter -> LOG.info("onMeterAdded: {}", meter.getId().getName()))
            .onMeterRemoved(meter -> LOG.info("onMeterRemoved: {}", meter.getId().getName()))
            .onMeterRegistrationFailed(
                (id, s) -> LOG.info("onMeterRegistrationFailed - id '{}' value '{}'", id.getName(), s))
            .meterFilter(PrefixMetricFilter.withPrefix(PREFIX))
            .meterFilter(
                MeterFilter.deny(id ->
                    id.getName().startsWith(PREFIX + ".jvm")
                        || id.getName().startsWith(PREFIX + ".system")
                        || id.getName().startsWith(PREFIX + ".process")
                        || id.getName().startsWith(PREFIX + ".logback")
                        || id.getName().startsWith(PREFIX + ".tomcat"))
            )
            .meterFilter(MeterFilter.ignoreTags("host", "host.name"))
            .namingConvention(NamingConvention.snakeCase);
    }
}
The @KafkaListener for each consumer is set as:
@KafkaListener(
        id = "query-routing",
        idIsGroup = true,
        topics = "${query-routing.consumer.topic}",
        groupId = "${query-routing.consumer.groupId}",
        containerFactory = "queryRoutingContainerFactory")
public void listenForMessages(ConsumerRecord<String, RoutingInfo> record) {
    // Handle each record ...
}
and
@KafkaListener(
        id = "query-tracking",
        idIsGroup = true,
        topics = "${query-tracking.consumer.topic}",
        groupId = "${query-tracking.consumer.groupId}",
        containerFactory = "queryTrackingContainerFactory"
)
public void listenForMessages(List<ConsumerRecord<String, QueryTrackingMessage>> consumerRecords, Acknowledgment ack) {
    // Handle each record ...
}
When the application starts up, going to the actuator/prometheus endpoint I can see the metric for both consumers:
# HELP dps_kafka_consumer_fetch_manager_records_consumed_total The total number of records consumed
# TYPE dps_kafka_consumer_fetch_manager_records_consumed_total counter
dps_kafka_consumer_fetch_manager_records_consumed_total{client_id="consumer-qf-query-tracking-consumer-1",kafka_version="3.1.2",spring_id="not.managed.by.Spring.consumer-qf-query-tracking-consumer-1",} 7.0
dps_kafka_consumer_fetch_manager_records_consumed_total{client_id="consumer-QF-Routing-f5d0d9f1-e261-407b-954d-5d217211dee0-2",kafka_version="3.1.2",spring_id="not.managed.by.Spring.consumer-QF-Routing-f5d0d9f1-e261-407b-954d-5d217211dee0-2",} 0.0
But a few seconds later there is a new call to io.micrometer.core.instrument.binder.kafka.KafkaMetrics#checkAndBindMetrics, which removes a set of metrics (including kafka.consumer.fetch.manager.records.consumed.total):
onMeterRegistrationFailed - dps.kafka.consumer.fetch.manager.records.consumed.total string Prometheus requires that all meters with the same name have the same set of tag keys. There is already an existing meter named 'dps.kafka.consumer.fetch.manager.records.consumed.total' containing tag keys [client_id, kafka_version, spring_id]. The meter you are attempting to register has keys [client_id, kafka_version, spring_id, topic].
Going again to actuator/prometheus will only show the metric for the query-routing consumer:
# HELP deus_dps_persistence_kafka_consumer_fetch_manager_records_consumed_total The total number of records consumed for a topic
# TYPE deus_dps_persistence_kafka_consumer_fetch_manager_records_consumed_total counter
deus_dps_persistence_kafka_consumer_fetch_manager_records_consumed_total{client_id="consumer-QF-Routing-0a739a21-4764-411a-9cc6-0e60293b40b4-2",kafka_version="3.1.2",spring_id="not.managed.by.Spring.consumer-QF-Routing-0a739a21-4764-411a-9cc6-0e60293b40b4-2",theKey="routing",topic="QF_query_routing_v1",} 0.0
As you can see above, the metric for the query-tracking consumer is gone.
As the log says, "The meter you are attempting to register has keys [client_id, kafka_version, spring_id, topic]". The issue is that I cannot find where this metric with a topic key is being registered, which triggers io.micrometer.core.instrument.binder.kafka.KafkaMetrics#checkAndBindMetrics and removes the metric for the query-tracking consumer.
I am using
micrometer-registry-prometheus version 1.9.5
spring boot version 2.7.5
spring kafka (org.springframework.kafka:spring-kafka)
My question is: why does registration of the metric kafka.consumer.fetch.manager.records.consumed.total fail, causing it to be removed for the query-tracking consumer, and how can I fix it?
I believe this is internal in Micrometer KafkaMetrics.
Periodically, it checks for new metrics; presumably, the topic one shows up after the consumer subscribes to the topic.
@Override
public void bindTo(MeterRegistry registry) {
    this.registry = registry;
    commonTags = getCommonTags(registry);
    prepareToBindMetrics(registry);
    checkAndBindMetrics(registry);
    VVVVVVVVVVVVVVVVVVVVVVVVVVVVVV
    scheduler.scheduleAtFixedRate(() -> checkAndBindMetrics(registry), getRefreshIntervalInMillis(),
            getRefreshIntervalInMillis(), TimeUnit.MILLISECONDS);
}
You should be able to write a filter to exclude the one with fewer tags.
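A minimal sketch of such a filter (my own assumption, not part of the original answer) would deny the topic-less variant of the meter by name, so only the topic-tagged registration survives; endsWith is used because the registry also applies the dps prefix filter:
@Bean
MeterRegistryCustomizer<MeterRegistry> denyTopicLessConsumedTotal() {
    // Sketch: drop the records-consumed meter that has no "topic" tag so it cannot
    // conflict with the later, topic-tagged registration (untested).
    return registry -> registry.config()
        .meterFilter(MeterFilter.deny(id ->
            id.getName().endsWith("kafka.consumer.fetch.manager.records.consumed.total")
                && id.getTag("topic") == null));
}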

java.lang.IllegalArgumentException: A consumerFactory is required

This configuration existed before my changes:
@Configuration
public class KafkaConfiguration {

    @Value(value = "${kafka.bootstrapServers:XXX}")
    private String bootstrapServers;

    @Value(value = "${kafka.connect.url:XXX}")
    public String kafkaConnectUrl;

    @Bean
    public AdminClient adminClient() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        return AdminClient.create(configs);
    }

    @Bean
    public WebProperties.Resources resources() {
        return new WebProperties.Resources();
    }

    @Bean
    public KafkaConnectClient connectClient() {
        return new KafkaConnectClient(new org.sourcelab.kafka.connect.apiclient.Configuration(kafkaConnectUrl));
    }
}
I've set up the app so that I can use KafkaTemplate:
application.yaml
spring:
  profiles:
    active: dev
  kafka:
    bootstrap-servers: XXX
    properties:
      schema.registry.url: XXX
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.protobuf.KafkaProtobufSerializer
    consumer:
      group-id: kafka-management-service-group-id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: io.confluent.kafka.serializers.protobuf.KafkaProtobufDeserializer
I can correctly write and read messages using KafkaTemplate from a topic with 1 partition that I created manually:
@GetMapping("/loadData")
public void loadData() {
    IntStream.range(0, 1000000).forEach(key -> {
        final General general = General.newBuilder()
                .setUuid(StringValue.newBuilder().setValue(String.valueOf(key))
                        .build()).build();
        template.send("test",
                String.valueOf(key),
                ScheduledJob.newBuilder().setGeneral(general).build()
        );
    });
}

@GetMapping("/readData")
public void readData() {
    Collection<TopicPartitionOffset> topicPartitionOffsets = new ArrayList<>();
    final TopicPartitionOffset topicPartitionOffset = new TopicPartitionOffset("test", 0, SeekPosition.BEGINNING);
    topicPartitionOffsets.add(topicPartitionOffset);
    final Iterator<ConsumerRecord<String, ScheduledJob>> iterator = template.receive(topicPartitionOffsets).iterator();
    while (iterator.hasNext()) {
        final ConsumerRecord<String, ScheduledJob> next = iterator.next();
        System.out.println("Key = " + next.key());
        System.out.println("Value = " + next.value());
        System.out.println();
    }
}
Current task: I'd like to read all messages of a topic from a specific partition X (therefore I need the number of partitions), from a specific offset Y to a specific offset Z, and ignore the rest.
I've read that I can do something along these lines to get the partition number for a specific topic:
@Autowired
private final ConsumerFactory<Object, Object> consumerFactory;

public String[] partitions(String topic) {
    try (Consumer<Object, Object> consumer = consumerFactory.createConsumer()) {
        return consumer.partitionsFor(topic).stream()
                .map(pi -> "" + pi.partition())
                .toArray(String[]::new);
    }
}
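And for the offset-range part of the task, a rough, untested sketch of my own (not from the original post) built on the same consumer factory could look like the following; the method name and the bounds handling are assumptions:
public List<ConsumerRecord<String, ScheduledJob>> readRange(String topic, int partition, long fromOffset, long toOffset) {
    List<ConsumerRecord<String, ScheduledJob>> result = new ArrayList<>();
    TopicPartition tp = new TopicPartition(topic, partition);
    // Assumes a ConsumerFactory<String, ScheduledJob> is available (see the answer below).
    try (Consumer<String, ScheduledJob> consumer = consumerFactory.createConsumer()) {
        consumer.assign(Collections.singleton(tp));   // read only partition X
        consumer.seek(tp, fromOffset);                // start at offset Y
        boolean done = false;
        while (!done) {
            ConsumerRecords<String, ScheduledJob> records = consumer.poll(Duration.ofSeconds(1));
            if (records.isEmpty()) {
                break; // nothing more to read for now
            }
            for (ConsumerRecord<String, ScheduledJob> record : records.records(tp)) {
                if (record.offset() > toOffset) {     // stop after offset Z
                    done = true;
                    break;
                }
                result.add(record);
            }
        }
    }
    return result;
}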
Problem: Despite the fact that I can read and write messages, a ConsumerFactory isn't created automatically.
Context: Our frontend needs to support paging, multi-column ordering and filtering. The source is Kafka messages from a generic topic with any number of partitions. Maybe this isn't the best way to do it, so I'm also open to suggestions.
I don't see how it is possible to receive records without adding a consumer factory to the template; Boot does not do that. See KafkaTemplate.oneOnly(), which is called by the receive methods...
private Properties oneOnly() {
    Assert.notNull(this.consumerFactory, "A consumerFactory is required");
    Properties props = new Properties();
    props.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");
    return props;
}
Boot auto configures a consumer factory, but it does not add it to the template because using template to receive is an uncommon use case.
/**
 * Set a consumer factory for receive operations.
 * @param consumerFactory the consumer factory.
 * @since 2.8
 */
public void setConsumerFactory(ConsumerFactory<K, V> consumerFactory) {
    this.consumerFactory = consumerFactory;
}
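So the template has to be given the auto-configured consumer factory explicitly. A minimal sketch of that wiring (the bean definition and the generic types are my assumptions, not from the original answer; the injected factories are the ones Boot auto-configures from application.yaml):
@Bean
public KafkaTemplate<String, ScheduledJob> kafkaTemplate(
        ProducerFactory<String, ScheduledJob> producerFactory,
        ConsumerFactory<String, ScheduledJob> consumerFactory) {
    KafkaTemplate<String, ScheduledJob> template = new KafkaTemplate<>(producerFactory);
    // Required for template.receive(...); Boot does not set this automatically.
    template.setConsumerFactory(consumerFactory);
    return template;
}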

Spring integration properties for Kafka

While trying to use listener config properties in application.yml, I am facing an issue where the @KafkaListener-annotated method is not invoked at all if I use the application.yml config (listener.type = batch). It only gets invoked when I explicitly set setBatchListener to true in code. Here is my code and configuration.
Consumer code:
@KafkaListener(containerFactory = "kafkaListenerContainerFactory",
        topics = "${spring.kafka.template.default-topic}",
        groupId = "${spring.kafka.consumer.group-id}")
public void receive(List<ConsumerRecord<String, byte[]>> consumerRecords, Acknowledgment acknowledgment) {
    processor.process(consumerRecords, acknowledgment);
}
application.yml:
listener:
  missing-topics-fatal: false
  type: batch
  ack-mode: manual
Consumer configuration:
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(
            new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties()));
    factory.setErrorHandler(new SeekToCurrentErrorHandler(new UpdateMessageErrorHandler(), new FixedBackOff(idleEventInterval, maxFailures)));
    final ContainerProperties properties = factory.getContainerProperties();
    properties.setIdleBetweenPolls(idleBetweenPolls);
    properties.setIdleEventInterval(idleEventInterval);
    return factory;
}
If I'm not mistaken, by building the ConcurrentKafkaListenerContainerFactory yourself in your configuration you're essentially overriding a piece of code that is usually executed by the ConcurrentKafkaListenerContainerFactoryConfigurer class in the Spring Boot autoconfiguration package:
if (properties.getType().equals(Type.BATCH)) {
    factory.setBatchListener(true);
    factory.setBatchErrorHandler(this.batchErrorHandler);
} else {
    factory.setErrorHandler(this.errorHandler);
}
Since it's hardcoded in your application.yaml file anyway, why is it a bad thing for it to be configured in your @Configuration file?
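That said, if you do want the spring.kafka.listener.* properties (including type: batch) to keep working while still customizing the factory, one option is to let Boot's ConcurrentKafkaListenerContainerFactoryConfigurer do the property-driven setup first and then layer your own settings on top. A rough sketch, assuming the timing fields from the question (untested):
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    // Applies the spring.kafka.listener.* properties (type: batch, ack-mode: manual, ...)
    configurer.configure(factory, consumerFactory);
    // Then add customizations that have no property equivalent, e.g.:
    factory.getContainerProperties().setIdleBetweenPolls(idleBetweenPolls);
    return factory;
}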

Spring Kafka Producer not sending to Kafka 1.0.0 (Magic v1 does not support record headers)

I am using this docker-compose setup for setting up Kafka locally: https://github.com/wurstmeister/kafka-docker/
docker-compose up works fine, creating topics via shell works fine.
Now I try to connect to Kafka via spring-kafka:2.1.0.RELEASE.
When starting up, the Spring application prints the correct version of Kafka:
o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.0
o.a.kafka.common.utils.AppInfoParser : Kafka commitId : aaa7af6d4a11b29d
I try to send a message like this
kafkaTemplate.send("test-topic", UUID.randomUUID().toString(), "test");
Sending on client side fails with
UnknownServerException: The server experienced an unexpected error when processing the request
In the server console I get the message Magic v1 does not support record headers
Error when handling request {replica_id=-1,max_wait_time=100,min_bytes=1,max_bytes=2147483647,topics=[{topic=test-topic,partitions=[{partition=0,fetch_offset=39,max_bytes=1048576}]}]} (kafka.server.KafkaApis)
java.lang.IllegalArgumentException: Magic v1 does not support record headers
Googling suggests a version conflict, but the versions seem to fit (org.apache.kafka:kafka-clients:1.0.0 is on the classpath).
Any clues? Thanks!
Edit:
I narrowed down the source of the problem. Sending plain Strings works, but sending JSON via JsonSerializer results in the given problem. Here is the content of my producer config:
@Value("\${kafka.bootstrap-servers}")
lateinit var bootstrapServers: String

@Bean
fun producerConfigs(): Map<String, Any> =
    HashMap<String, Any>().apply {
        // list of host:port pairs used for establishing the initial connections to the Kafka cluster
        put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
        put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java)
        put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer::class.java)
    }

@Bean
fun producerFactory(): ProducerFactory<String, MyClass> =
    DefaultKafkaProducerFactory(producerConfigs())

@Bean
fun kafkaTemplate(): KafkaTemplate<String, MyClass> =
    KafkaTemplate(producerFactory())
I had a similar issue. Spring Kafka adds type-info headers by default if we use JsonSerializer or JsonSerde for values.
In order to prevent this issue, we need to disable adding the type-info headers.
If you are fine with the default JSON serialization, then use the following (the key point here is ADD_TYPE_INFO_HEADERS):
Map<String, Object> props = new HashMap<>(defaultSettings);
props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props);
But if you need a custom JsonSerializer with a specific ObjectMapper (e.g. with PropertyNamingStrategy.SNAKE_CASE), you should disable adding the type-info headers explicitly on the JsonSerializer, because Spring Kafka ignores DefaultKafkaProducerFactory's ADD_TYPE_INFO_HEADERS property in that case (in my opinion, a bad design choice in Spring Kafka):
JsonSerializer<Object> valueSerializer = new JsonSerializer<>(customObjectMapper);
valueSerializer.setAddTypeInfo(false);
ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props, Serdes.String().serializer(), valueSerializer);
or if we use JsonSerde, then:
Map<String, Object> jsonSerdeProperties = new HashMap<>();
jsonSerdeProperties.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
JsonSerde<T> jsonSerde = new JsonSerde<>(serdeClass);
jsonSerde.configure(jsonSerdeProperties, false);
Solved. The problem was neither the broker, some Docker cache, nor the Spring app.
The problem was a console consumer which I used in parallel for debugging. This was an "old" consumer started with kafka-console-consumer.sh --topic=topic --zookeeper=...
It actually prints a warning when started: Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
A "new" consumer with the --bootstrap-server option should be used (especially when using Kafka 1.0 with JsonSerializer).
Note: Using an old consumer here can indeed affect the producer.
I just ran a test against that docker image with no problems...
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f093b3f2475c kafkadocker_kafka "start-kafka.sh" 33 minutes ago Up 2 minutes 0.0.0.0:32768->9092/tcp kafkadocker_kafka_1
319365849e48 wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 33 minutes ago Up 2 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp kafkadocker_zookeeper_1
.
@SpringBootApplication
public class So47953901Application {

    public static void main(String[] args) {
        SpringApplication.run(So47953901Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<Object, Object> template) {
        return args -> template.send("foo", "bar", "baz");
    }

    @KafkaListener(id = "foo", topics = "foo")
    public void listen(String in) {
        System.out.println(in);
    }
}
.
spring.kafka.bootstrap-servers=192.168.177.135:32768
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
.
2017-12-23 13:27:27.990 INFO 21305 --- [ foo-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [foo-0]
baz
EDIT
Still works for me...
spring.kafka.bootstrap-servers=192.168.177.135:32768
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
.
2017-12-23 15:27:59.997 INFO 44079 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
...
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
...
2017-12-23 15:28:00.071 INFO 44079 --- [ foo-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [foo-0]
baz
You are using Kafka version <= 0.10.x.x.
When using this version, you must set JsonSerializer.ADD_TYPE_INFO_HEADERS to false, as below,
Map<String, Object> props = new HashMap<>(defaultSettings);
props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props);
for your producer factory properties.
In case you are using Kafka version > 0.10.x.x, it should just work fine.
