2018-03-22 11:50:29.175 INFO 12071 --- [io-26681-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [ns014:9092]
buffer.memory = 33554432
client.id =
compression.type = none
Kafka in Spring Boot 2.0: when I send a message, the ProducerConfig values shown above always appear in the log. How can I turn this off?
I haven't found any configuration change that suppresses it.
If you are configuring Kafka with a raw KafkaProducer, you can close the producer as below:
Producer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>("my-topic", "key", "any data"));
producer.close();
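Since KafkaProducer implements Closeable, a try-with-resources block is an equivalent way to guarantee the close; a minimal sketch, assuming props is configured as usual:
try (Producer<String, String> producer = new KafkaProducer<>(props)) {
    producer.send(new ProducerRecord<>("my-topic", "key", "any data"));
} // close() runs automatically and waits for previously sent records to complete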
If you are using KafkaTemplate with async calls, you can do it as follows:
ListenableFuture<SendResult<String, Payload>> sr = kafkaTemplate.send(topic, payload);
kafkaTemplate.flush();
kafkaTemplate.flush() blocks until all records buffered by this particular Kafka producer have been sent.
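If you only need to wait for a single send rather than flush the whole producer, you can also block on the returned future; a sketch using the sr future from above (the timeout value is illustrative):
// wait up to 10 seconds for the broker to acknowledge this record
SendResult<String, Payload> result = sr.get(10, TimeUnit.SECONDS);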
I'm learning Kafka Streams and I'm getting an error; I have tried a few things but nothing works.
Input: value_1, value_2, value_3, ...
public static void main(String[] args) throws InterruptedException {
String host = "127.0.0.1:9092";
String consumer_group = "firstGroup1";
String topic = "test1";
// create properties
Properties properties = new Properties();
properties.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, host);
properties.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, consumer_group);
properties.setProperty(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class.getName());
properties.setProperty(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.StringSerde.class.getName());
// create a topology
StreamsBuilder builder = new StreamsBuilder();
// input topic
KStream<String, String> inputtopic = builder.stream(topic);
// filter the value
KStream<String, String> filtered_stream = inputtopic.filter((k, v) -> ((v.equalsIgnoreCase("value_5")) || (v.equalsIgnoreCase("value_7")) || (v.equalsIgnoreCase("value_9"))));
filtered_stream.foreach((k, v) -> System.out.println(v));
// output topic set
filtered_stream.to("prime_value");
// build a topology
KafkaStreams kafkaStreams = new KafkaStreams(builder.build(), properties);
// start our stream system
kafkaStreams.start();
}
Error message
1800 [main] INFO org.apache.kafka.streams.processor.internals.assignment.AssignorConfiguration - stream-thread [firstGroup1-d1244e8e-dbc1-4139-8876-ca75cb89c609-StreamThread-1-consumer] Cooperative rebalancing enabled now
1852 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.0.0
1852 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 8cb0a5e9d3441962
1852 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1641673582569
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.kafka.clients.consumer.ConsumerConfig.addDeserializerToConfig(Ljava/util/Map;Lorg/apache/kafka/common/serialization/Deserializer;Lorg/apache/kafka/common/serialization/Deserializer;)Ljava/util/Map;
at org.apache.kafka.streams.processor.internals.StreamThread$InternalConsumerConfig.<init>(StreamThread.java:537)
at org.apache.kafka.streams.processor.internals.StreamThread$InternalConsumerConfig.<init>(StreamThread.java:535)
at org.apache.kafka.streams.processor.internals.StreamThread.<init>(StreamThread.java:527)
at org.apache.kafka.streams.processor.internals.StreamThread.create(StreamThread.java:406)
at org.apache.kafka.streams.KafkaStreams.createAndAddStreamThread(KafkaStreams.java:897)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:887)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:783)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:693)
at com.example.kafkastreams.main(kafkastreams.java:47)
Line no 47 is: KafkaStreams kafkaStreams = new KafkaStreams(builder.build(), properties);
It looks like you have mismatched versions of kafka-clients and kafka-streams - they must be the same version.
When using Spring Boot, you should not specify versions for the Kafka dependencies; Boot will bring in the correct versions of both libraries.
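For example, with Maven and the Spring Boot parent, the dependency entries would carry no version tags; a sketch of the relevant pom.xml fragment (the artifact list is illustrative):
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <!-- no <version>: Spring Boot's dependency management picks one that matches its kafka-clients -->
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <!-- likewise managed by Boot, so kafka-streams and kafka-clients stay in sync -->
</dependency>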
I have a simple Spring Boot service that is called on demand and consumes a specified number of messages from a topic; the number of messages to consume is passed as a parameter. The service is called every 30 minutes. Each message is ~1.6 KB, and I always get only around 1100 or 1200 messages each time. There is one topic with a single partition, and the REST service is the only consumer. The service is called like this: http://example.com/messages?limit=2000
private OutputResponse getNewMessages(String limit) throws Exception {
System.out.println("***** START *****");
final long start = System.nanoTime();
int loopCntr = 0;
int counter = 0;
OutputResponse outputResponse = new OutputResponse();
Output output = new Output();
List<Output> rspListObject = new ArrayList<>();
Consumer<Object, Object> consumer = null;
String result = null;
try {
Properties p = new Properties();
p.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "180000");
p.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, limit);
consumer = consumerFactory.createConsumer("my-group-id", null, null, p);
consumer.assign(Collections.singleton(new TopicPartition("test-topic", 0)));
while (loopCntr < 2) {
loopCntr++;
ConsumerRecords<Object, Object> consumerRecords = consumer.poll(Duration.ofSeconds(15));
for (ConsumerRecord<Object, Object> record : consumerRecords)
{
counter++;
try
{
//get json string
result = mapper.writeValueAsString(record.value());
//to json
output = mapper.readValue(result, Output.class);
rspListObject.add(output);
} catch (Exception e) {
logger.error(e);
insertToDB(record.value(),record.offset());
}
}
}
outputResponse.setObjects(rspListObject);
final long end = System.nanoTime();
System.out.println("Took: " + ((end - start) / 1000000) + "ms");
System.out.println("Took: " + (end - start) / 1000000000 + " seconds");
// commit the offset of records to broker
if (counter > 0) {
consumer.commitSync();
}
} finally {
try {
System.out.println(" >>>>> closing the consumer");
if (consumer != null)
consumer.close();
}catch(Exception e){
//log message
}
}
return outputResponse;
}
This is what I have in application.yml:
spring:
kafka:
consumer:
enable-auto-commit: false
auto-offset-reset: latest
key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
properties:
spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
spring.json.trusted.packages: '*'
max.poll.interval.ms: 300000
group-id: my-group-id
ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = my-group-id
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 180000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
This is the error I am getting at commitSync(). I tried consuming only 5 messages per poll(), and I tried setting p.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "180000"), but I get the same error:
Commit cannot be completed since the group has already rebalanced and
assigned the partitions to another member. This means that the time
between subsequent calls to poll() was longer than the configured
max.poll.interval.ms, which typically implies that the poll loop is
spending too much time message processing. You can address this either
by increasing max.poll.interval.ms or by reducing the maximum size of
batches returned in poll() with max.poll.records.
I believe that this application simulates your use case but it doesn't exhibit the behavior you describe (as I expected). You should never see a rebalance when manually assigning the topic/partition.
I suggest you run both and compare DEBUG logs to figure out what's wrong.
@SpringBootApplication
public class So63713473Application {
public static void main(String[] args) {
SpringApplication.run(So63713473Application.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("so63713473").partitions(1).replicas(1).build();
}
@Bean
public ApplicationRunner runner(ConsumerFactory<String, String> factory, KafkaTemplate<String, String> template) {
String msg = new String(new byte[1600]);
return args -> {
while (true) {
System.out.println("Hit enter to run a consumer");
System.in.read();
int count = 0;
try (Consumer<String, String> consumer = factory.createConsumer("so63713473", "")) {
IntStream.range(0, 1200).forEach(i -> template.send("so63713473", msg));
consumer.assign(Collections.singletonList(new TopicPartition("so63713473", 0)));
while (count < 1200) {
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
count += records.count();
System.out.println("Count=" + count);
}
consumer.commitSync();
System.out.println("Success");
}
}
};
}
}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.fetch-min-size=1920000
spring.kafka.consumer.fetch-max-wait=1000
spring.kafka.producer.properties.linger.ms=50
EDIT
I can reproduce your issue by adding a second (auto-assigned) consumer in the same group.
@KafkaListener(id = "so63713473", topics = "so63713473")
public void listen(String in) {
System.out.println(in);
}
2020-09-08 16:40:15.828 ERROR 88813 --- [ main] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Failed to execute ApplicationRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:789) [spring-boot-2.3.3.RELEASE.jar:2.3.3.RELEASE]
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:776) [spring-boot-2.3.3.RELEASE.jar:2.3.3.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:322) [spring-boot-2.3.3.RELEASE.jar:2.3.3.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237) [spring-boot-2.3.3.RELEASE.jar:2.3.3.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) [spring-boot-2.3.3.RELEASE.jar:2.3.3.RELEASE]
at com.example.demo.So63713473Application.main(So63713473Application.java:25) [classes/:na]
Caused by: org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
You can't mix manual and auto assignment in the same group.
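A minimal fix sketch for the on-demand consumer, assuming it keeps manual assignment: give it a group.id that no subscribing consumer uses (the group name here is hypothetical):
// "my-manual-group-id" must not be shared with any consumer that uses subscribe()
consumer = consumerFactory.createConsumer("my-manual-group-id", null, null, p);
consumer.assign(Collections.singleton(new TopicPartition("test-topic", 0)));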
I'm using Spring Boot 2.2.4.RELEASE and spring-kafka 2.4.2.RELEASE.
My scenario is the following one:
In my microservice (let's call it the producer microservice) I need to create Kafka topics and then, in some circumstances, send a message over a single topic.
This message must be received and handled by another microservice (let's call it the consumer microservice). In this consumer microservice I must create a Kafka listener every time a new topic is created on the server side.
So I wrote the following code
producer microservice
spring kafka config:
@Configuration
public class WebmailKafkaConfig {
@Autowired
private Environment environment;
@Bean
public KafkaAdmin kafkaAdmin(){
Map<String, Object> configuration = new HashMap<String, Object>();
configuration.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, environment.getProperty("webmail.be.messaging.kafka.bootstrap.address"));
KafkaAdmin result = new KafkaAdmin(configuration);
return result;
}
@Bean
public ProducerFactory<String, RicezioneMailMessage> producerFactory() {
Map<String, Object> configProps = new HashMap<>();
configProps.put( ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, environment.getProperty("webmail.be.messaging.kafka.bootstrap.address"));
configProps.put( ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
//configProps.put( ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProps.put( ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
return new DefaultKafkaProducerFactory<>(configProps);
}
@Bean("ricezioneMailMessageKafkaTemplate")
public KafkaTemplate<String, RicezioneMailMessage> ricezioneMailMessageKafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
}
spring service kafka manager
@Service
public class WebmailKafkaTopicSvcImpl implements WebmailKafkaTopicSvc {
private static final Logger logger = LoggerFactory.getLogger(WebmailKafkaTopicSvcImpl.class.getName());
@Autowired
private KafkaAdmin kafkaAdmin;
@Value("${webmail.be.messaging.kafka.topic.numero.partizioni}")
private int numeroPartizioni;
@Value("${webmail.be.messaging.kafka.topic.fattore.replica}")
private short fattoreReplica;
@Autowired
@Qualifier("ricezioneMailMessageKafkaTemplate")
private KafkaTemplate<String, RicezioneMailMessage> ricezioneMailMessageKafkaTemplate;
@Override
public void createKafkaTopic(String topicName) throws Exception {
if(!StringUtils.hasText(topicName)){
throw new IllegalArgumentException("Passato un topic name non valido ["+topicName+"]");
}
AdminClient adminClient = null;
try{
adminClient = AdminClient.create(kafkaAdmin.getConfig());
List<NewTopic> topics = new ArrayList<>(1);
NewTopic topic = new NewTopic(topicName, numeroPartizioni, fattoreReplica);
topics.add(topic);
CreateTopicsResult result = adminClient.createTopics(topics);
result.all().whenComplete(new KafkaFuture.BiConsumer<Void, Throwable>() {
@Override
public void accept(Void aVoid, Throwable throwable) {
if( throwable != null ){
logger.error("Errore creazione topic", throwable);
}
}
});
}finally {
if( adminClient != null ){
adminClient.close();
}
}
}
@Override
public void sendMessage(RicezioneMailMessage rmm) throws Exception {
ListenableFuture<SendResult<String, RicezioneMailMessage>> future = ricezioneMailMessageKafkaTemplate.send(rmm.getPk(), rmm);
future.addCallback(new ListenableFutureCallback<SendResult<String, RicezioneMailMessage>>() {
@Override
public void onFailure(Throwable ex) {
if( logger.isWarnEnabled() ){
logger.warn("Impossibile inviare il messaggio=["
+ rmm + "] a causa di : " + ex.getMessage(),ex);
}
}
@Override
public void onSuccess(SendResult<String, RicezioneMailMessage> result) {
if(logger.isTraceEnabled()){
logger.trace("Inviato messaggio=[" + rmm +
"] con offset=[" + result.getRecordMetadata().offset() + "]");
}
}
});
}
}
On the producer side everything works well: I'm able to create topics and send messages.
consumer microservice
dynamic listener class
public class DynamicKafkaConsumer {
private final String brokerAddress;
private final String topicName;
private boolean stopTest;
private static final Logger logger = LoggerFactory.getLogger(DynamicKafkaConsumer.class.getName());
public DynamicKafkaConsumer(String brokerAddress, String topicName) {
if( !StringUtils.hasText(brokerAddress)){
throw new IllegalArgumentException("Passato un broker address non valido");
}
if( !StringUtils.hasText(topicName)){
throw new IllegalArgumentException("Passato un topicName non valido");
}
this.brokerAddress = brokerAddress;
this.topicName = topicName;
if( logger.isTraceEnabled() ){
logger.trace("Creato {} con topicName {} e brokerAddress {}", this.getClass().getName(), this.topicName, this.brokerAddress);
}
}
public final void start() {
MessageListener<String, RicezioneMailMessage> messageListener = (record -> {
RicezioneMailMessage messaggioRicevuto = record.value();
if( logger.isInfoEnabled() ){
logger.info("Ricevuto messaggio {} su topic {}", messaggioRicevuto, topicName);
}
stopTest = true;
});
ConcurrentMessageListenerContainer<String, RicezioneMailMessage> container =
new ConcurrentMessageListenerContainer<>(
consumerFactory(brokerAddress),
containerProperties(topicName, messageListener));
container.start();
}
private DefaultKafkaConsumerFactory<String, RicezioneMailMessage> consumerFactory(String brokerAddress) {
return new DefaultKafkaConsumerFactory<>(
consumerConfig(brokerAddress),
new StringDeserializer(),
new JsonDeserializer<>(RicezioneMailMessage.class));
}
private ContainerProperties containerProperties(String topic, MessageListener<String, RicezioneMailMessage> messageListener) {
ContainerProperties containerProperties = new ContainerProperties(topic);
containerProperties.setMessageListener(messageListener);
return containerProperties;
}
private Map<String, Object> consumerConfig(String brokerAddress) {
return Map.of(
BOOTSTRAP_SERVERS_CONFIG, brokerAddress,
GROUP_ID_CONFIG, "groupId",
AUTO_OFFSET_RESET_CONFIG, "earliest",
ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false"
);
}
public boolean isStopTest() {
return stopTest;
}
public void setStopTest(boolean stopTest) {
this.stopTest = stopTest;
}
}
simple unit test
public class TestRicezioneMessaggiCasellaPostale {
private static final Logger logger = LoggerFactory.getLogger(TestRicezioneMessaggiCasellaPostale.class.getName());
@Test
public void testRicezioneMessaggiMail() {
try {
String brokerAddress = "localhost:9092";
DynamicKafkaConsumer consumer = new DynamicKafkaConsumer(brokerAddress, "f586caf2-ffdc-4e3a-88b9-a262a502f8ac");
consumer.start();
boolean stopTest = consumer.isStopTest();
while (!stopTest) {
stopTest = consumer.isStopTest();
}
} catch (Exception e) {
logger.error("Errore nella configurazione della casella postale; {}", e.getMessage(), e);
}
}
}
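As an aside, the busy-wait loop above can be replaced with a latch; a minimal sketch, assuming the listener is handed the latch and calls countDown() instead of setting stopTest:
CountDownLatch latch = new CountDownLatch(1);
// inside the MessageListener lambda: latch.countDown();
boolean received = latch.await(30, TimeUnit.SECONDS); // true if a message arrived within 30 s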
On the consumer side I can't read any messages; note that the topic "f586caf2-ffdc-4e3a-88b9-a262a502f8ac" exists and it's the same topic used on the producer side.
When I send a message on the producer side I can see this log:
2020-02-19 22:00:22,320 52822 [kafka-producer-network-thread | producer-1] TRACE i.e.t.r.p.w.b.s.i.WebmailKafkaTopicSvcImpl - Inviato messaggio=[RicezioneMailMessage{pk='c5c8f8a4-8ddd-407a-9e51-f6b14d84f304', tipoMessaggio='mail'}] con offset=[0]
On the consumer side I don't see any messages; I just see the following log output:
2020-02-19 22:00:03,194 1442 [main] INFO o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values:
allow.auto.create.topics = false
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.dns.lookup = default
client.id =
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = groupId
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
2020-02-19 22:00:03,630 1878 [main] INFO o.a.k.common.utils.AppInfoParser - Kafka version: 2.4.0
2020-02-19 22:00:03,630 1878 [main] INFO o.a.k.common.utils.AppInfoParser - Kafka commitId: 77a89fcf8d7fa018
2020-02-19 22:00:03,630 1878 [main] INFO o.a.k.common.utils.AppInfoParser - Kafka startTimeMs: 1582146003626
2020-02-19 22:00:03,636 1884 [main] INFO o.a.k.c.consumer.KafkaConsumer - [Consumer clientId=consumer-groupId-1, groupId=groupId] Subscribed to topic(s): f586caf2-ffdc-4e3a-88b9-a262a502f8ac
2020-02-19 22:00:03,645 1893 [main] INFO o.s.s.c.ThreadPoolTaskScheduler - Initializing ExecutorService
2020-02-19 22:00:03,667 1915 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:04,123 2371 [consumer-0-C-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-groupId-1, groupId=groupId] Cluster ID: hOOJH-WNTNiXD4il0Y7_0Q
2020-02-19 22:00:05,052 3300 [consumer-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null)
2020-02-19 22:00:05,059 3307 [consumer-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] (Re-)joining group
2020-02-19 22:00:05,116 3364 [consumer-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] (Re-)joining group
2020-02-19 22:00:05,154 3402 [consumer-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Finished assignment for group at generation 1: {consumer-groupId-1-41df9153-7c33-46b1-8274-2d7ee2bfb35c=org.apache.kafka.clients.consumer.ConsumerPartitionAssignor$Assignment#a95df1b}
2020-02-19 22:00:05,327 3575 [consumer-0-C-1] INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Successfully joined group with generation 1
2020-02-19 22:00:05,335 3583 [consumer-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Adding newly assigned partitions: f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0
2020-02-19 22:00:05,363 3611 [consumer-0-C-1] INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer clientId=consumer-groupId-1, groupId=groupId] Found no committed offset for partition f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0
2020-02-19 22:00:05,401 3649 [consumer-0-C-1] INFO o.a.k.c.c.i.SubscriptionState - [Consumer clientId=consumer-groupId-1, groupId=groupId] Resetting offset for partition f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0 to offset 0.
2020-02-19 22:00:05,404 3652 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Committing on assignment: {f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}}
2020-02-19 22:00:05,432 3680 [consumer-0-C-1] INFO o.s.k.l.ConcurrentMessageListenerContainer - groupId: partitions assigned: [f586caf2-ffdc-4e3a-88b9-a262a502f8ac-0]
2020-02-19 22:00:08,669 6917 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:08,670 6918 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:13,671 11919 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:13,671 11919 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:18,673 16921 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:18,673 16921 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:23,674 21922 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:23,674 21922 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:28,676 26924 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:28,676 26924 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:33,677 31925 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:33,677 31925 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:38,678 36926 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:38,678 36926 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
2020-02-19 22:00:43,678 41926 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Received: 0 records
2020-02-19 22:00:43,679 41927 [consumer-0-C-1] DEBUG o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer - Commit list: {}
Can anybody tell me where I'm wrong?
Thank you
Angelo
I found the root cause in your code.
Code to send the message, and the corresponding log, from the producer side:
ricezioneMailMessageKafkaTemplate.send(rmm.getPk(), rmm);
Inviato messaggio=[RicezioneMailMessage{pk='c5c8f8a4-8ddd-407a-9e51-f6b14d84f304', tipoMessaggio='mail'}] con offset=[0]
Code and log from consumer:
DynamicKafkaConsumer consumer = new DynamicKafkaConsumer(brokerAddress, "f586caf2-ffdc-4e3a-88b9-a262a502f8ac");
2020-02-19 22:00:03,636 1884 [main] INFO o.a.k.c.consumer.KafkaConsumer - [Consumer clientId=consumer-groupId-1, groupId=groupId] Subscribed to topic(s): f586caf2-ffdc-4e3a-88b9-a262a502f8ac
You are sending to topic: c5c8f8a4-8ddd-407a-9e51-f6b14d84f304
You are listening on topic: f586caf2-ffdc-4e3a-88b9-a262a502f8ac
The producer and consumer are sending to and listening on different topics: in the send() call above, rmm.getPk() is passed as the first argument, so the message's pk is being used as the topic name.
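A sketch of the corrected call, assuming the intended topic name is available as a variable (KafkaTemplate.send(topic, key, data) takes the topic as the first argument):
// topicName: the topic the consumer listens on, e.g. "f586caf2-ffdc-4e3a-88b9-a262a502f8ac"
ricezioneMailMessageKafkaTemplate.send(topicName, rmm.getPk(), rmm);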
I am using this docker-compose setup for setting up Kafka locally: https://github.com/wurstmeister/kafka-docker/
docker-compose up works fine, creating topics via shell works fine.
Now I try to connect to Kafka via spring-kafka:2.1.0.RELEASE
When starting up the Spring application it prints the correct version of Kafka:
o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.0
o.a.kafka.common.utils.AppInfoParser : Kafka commitId : aaa7af6d4a11b29d
I try to send a message like this
kafkaTemplate.send("test-topic", UUID.randomUUID().toString(), "test");
Sending on client side fails with
UnknownServerException: The server experienced an unexpected error when processing the request
In the server console I get the message Magic v1 does not support record headers
Error when handling request {replica_id=-1,max_wait_time=100,min_bytes=1,max_bytes=2147483647,topics=[{topic=test-topic,partitions=[{partition=0,fetch_offset=39,max_bytes=1048576}]}]} (kafka.server.KafkaApis)
java.lang.IllegalArgumentException: Magic v1 does not support record headers
Googling suggests a version conflict, but the versions seem to fit (org.apache.kafka:kafka-clients:1.0.0 is on the classpath).
Any clues? Thanks!
Edit:
I narrowed down the source of the problem: sending plain Strings works, but sending JSON via JsonSerializer results in the given problem. Here is the content of my producer config:
@Value("\${kafka.bootstrap-servers}")
lateinit var bootstrapServers: String
@Bean
fun producerConfigs(): Map<String, Any> =
HashMap<String, Any>().apply {
// list of host:port pairs used for establishing the initial connections to the Kafka cluster
put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java)
put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer::class.java)
}
@Bean
fun producerFactory(): ProducerFactory<String, MyClass> =
DefaultKafkaProducerFactory(producerConfigs())
@Bean
fun kafkaTemplate(): KafkaTemplate<String, MyClass> =
KafkaTemplate(producerFactory())
I had a similar issue. Spring Kafka's JsonSerializer (and JsonSerde) adds type-information headers by default when used for values.
In order to prevent this issue, we need to disable adding the type-info headers.
If you are fine with the default JSON serialization, use the following (the key point here is ADD_TYPE_INFO_HEADERS):
Map<String, Object> props = new HashMap<>(defaultSettings);
props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props);
But if you need a custom JsonSerializer with a specific ObjectMapper (e.g. with PropertyNamingStrategy.SNAKE_CASE), you should disable adding type-info headers explicitly on the JsonSerializer, because Spring Kafka ignores the ADD_TYPE_INFO_HEADERS property when you pass your own serializer instance to DefaultKafkaProducerFactory (in my opinion a questionable design choice in Spring Kafka):
JsonSerializer<Object> valueSerializer = new JsonSerializer<>(customObjectMapper);
valueSerializer.setAddTypeInfo(false);
ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props, Serdes.String().serializer(), valueSerializer);
or if we use JsonSerde, then:
Map<String, Object> jsonSerdeProperties = new HashMap<>();
jsonSerdeProperties.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
JsonSerde<T> jsonSerde = new JsonSerde<>(serdeClass);
jsonSerde.configure(jsonSerdeProperties, false);
Solved. The problem was neither the broker, some Docker cache, nor the Spring app.
The problem was a console consumer which I used in parallel for debugging. This was an "old" consumer, started with kafka-console-consumer.sh --topic=topic --zookeeper=...
It actually prints a warning when started: Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
A "new" consumer with --bootstrap-server option should be used (especially when using Kafka 1.0 with JsonSerializer).
Note: using an old consumer here can indeed affect the producer side. The failing request in the broker log above is a fetch request, which the broker has to down-convert to the old message format for that consumer, and the down-conversion fails on record headers.
I just ran a test against that docker image with no problems...
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f093b3f2475c kafkadocker_kafka "start-kafka.sh" 33 minutes ago Up 2 minutes 0.0.0.0:32768->9092/tcp kafkadocker_kafka_1
319365849e48 wurstmeister/zookeeper "/bin/sh -c '/usr/sb…" 33 minutes ago Up 2 minutes 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp kafkadocker_zookeeper_1
.
@SpringBootApplication
public class So47953901Application {
public static void main(String[] args) {
SpringApplication.run(So47953901Application.class, args);
}
@Bean
public ApplicationRunner runner(KafkaTemplate<Object, Object> template) {
return args -> template.send("foo", "bar", "baz");
}
@KafkaListener(id = "foo", topics = "foo")
public void listen(String in) {
System.out.println(in);
}
}
.
spring.kafka.bootstrap-servers=192.168.177.135:32768
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
.
2017-12-23 13:27:27.990 INFO 21305 --- [ foo-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [foo-0]
baz
EDIT
Still works for me...
spring.kafka.bootstrap-servers=192.168.177.135:32768
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
.
2017-12-23 15:27:59.997 INFO 44079 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
...
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
...
2017-12-23 15:28:00.071 INFO 44079 --- [ foo-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [foo-0]
baz
You are probably using Kafka version <= 0.10.x.x. In that case, you must set JsonSerializer.ADD_TYPE_INFO_HEADERS to false in your producer factory properties, as below:
Map<String, Object> props = new HashMap<>(defaultSettings);
props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props);
If you are using Kafka version > 0.10.x.x, it should just work.
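If the producer is configured through Spring Boot properties rather than Java config, the same switch can, as far as I know, be set declaratively (JsonSerializer.ADD_TYPE_INFO_HEADERS is the key spring.json.add.type.headers):
spring.kafka.producer.properties.spring.json.add.type.headers=false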
I'm currently trying to use Kafka and spring-kafka in order to consume messages.
But I have trouble running several consumers for the same topic, and I have several questions:
1 - My consumers tend to disconnect after some time and have trouble reconnecting
The following WARN is raised regularly on my consumers:
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] f.b.poc.crawler.kafka.KafkaListener : Consuming {"some-stuff": "yes"} from topic [job15]
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] f.b.p.c.w.services.impl.CrawlingService : Start of crawling
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] f.b.p.c.w.services.impl.CrawlingService : Url has already been treated ==> skipping
2017-09-06 15:32:35.054 WARN 5203 --- [nListener-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Auto-commit of offsets {job15-3=OffsetAndMetadata{offset=11547, metadata=''}, job15-2=OffsetAndMetadata{offset=15550, metadata=''}} failed for group group-3: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [job15-3, job15-2] for group group-3
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] s.k.l.ConcurrentMessageListenerContainer : partitions revoked:[job15-3, job15-2]
2017-09-06 15:32:35.054 INFO 5203 --- [nListener-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group group-3
This causes the consumer to stop and wait for several seconds.
As mentioned in the message, I increased the consumer's session.timeout.ms to something like 30000, but I still get the message.
As you can see in the provided logs, the disconnection occurs right after a record has finished processing, so well before 30 s of inactivity.
2 - Two consumer applications receive the same message REALLY often
While looking at my consumers' logs I saw that they tend to process the same messages. I understood Kafka is at-least-once, but I never thought I would encounter this much duplication.
Fortunately I use Redis, but I have probably misunderstood some tuning properties I need to set.
THE CODE
Note: I'm using ConcurrentMessageListenerContainer with auto-commit=true, but run with 1 thread. I just start several instances of the same application, because the consumer uses services that aren't thread-safe.
KafkaContext.java
@Slf4j
@Configuration
@EnableConfigurationProperties(value = KafkaConfig.class)
class KafkaContext {
@Bean(destroyMethod = "stop")
public ConcurrentMessageListenerContainer kafkaInListener(IKafkaListener listener, KafkaConfig config) {
final ContainerProperties containerProperties =
new ContainerProperties(config.getIn().getTopic());
containerProperties.setMessageListener(listener);
final DefaultKafkaConsumerFactory<Integer, String> defaultKafkaConsumerFactory =
new DefaultKafkaConsumerFactory<>(consumerConfigs(config));
final ConcurrentMessageListenerContainer messageListenerContainer =
new ConcurrentMessageListenerContainer<>(defaultKafkaConsumerFactory, containerProperties);
messageListenerContainer.setConcurrency(config.getConcurrency());
messageListenerContainer.setAutoStartup(false);
return messageListenerContainer;
}
private Map<String, Object> consumerConfigs(KafkaConfig config) {
final String kafkaHost = config.getHost() + ":" + config.getPort();
log.info("Crawler_Worker connecting to kafka at {} with consumerGroup {}", kafkaHost, config.getIn().getGroupId());
final Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaHost);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
props.put(ConsumerConfig.GROUP_ID_CONFIG, config.getIn().getGroupId());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JacksonNextSerializer.class);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 30000);
return props;
}
}
Listener
@Slf4j
@Component
class KafkaListener implements IKafkaListener {
private final ICrawlingService crawlingService;
@Autowired
public KafkaListener(ICrawlingService crawlingService) {
this.crawlingService = crawlingService;
}
@Override
public void onMessage(ConsumerRecord<Integer, Next> consumerRecord) {
log.info("Consuming {} from topic [{}]", JSONObject.wrap(consumerRecord.value()), consumerRecord.topic());
crawlingService.apply(consumerRecord.value());
}
}
The main issue here is that your consumer group is continuously being rebalanced. You are right about increasing session.timeout.ms, but I don't see this config applied in your configuration. Try removing:
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 30000);
and setting:
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
You can increase MAX_POLL_RECORDS_CONFIG to get better throughput in the communication with the brokers, but if you process messages in one thread only, it is safer to keep this value low.
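Putting that together, a sketch of the revised consumerConfigs entries (the values are the ones proposed above):
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);      // small batches keep the time between polls short
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000); // tolerate 30 s without heartbeats
// MAX_POLL_INTERVAL_MS_CONFIG removed: the 5-minute default leaves ample time to process a batch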