Facing 'Unexpected error in AddOffsetsToTxnResponse' issue in Spring Kafka

I am using Spring Boot 2.2.7.RELEASE and Spring Kafka 2.3.8 (with a chained Kafka transaction manager); the broker is Confluent Kafka.
I am facing an issue while sending a message to Kafka. Here are the logs:
2020-07-09 08:28:13.139 ERROR [xxxxxx-component-workflow-handler,,,] 9 --- [_response-4-C-1] essageListenerContainer$ListenerConsumer : Send offsets to transaction failed
org.apache.kafka.common.KafkaException: Unexpected error in AddOffsetsToTxnResponse: The producer attempted to use a producer id which is not currently assigned to its transactional id.
at org.apache.kafka.clients.producer.internals.TransactionManager$AddOffsetsToTxnHandler.handleResponse(TransactionManager.java:1406)
at org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:1069)
at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:561)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:553)
at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:425)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:311)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244)
at java.base/java.lang.Thread.run(Thread.java:832)
2020-07-09 08:28:13.153 ERROR [xxxxxx-component-workflow-handler,,,] 9 --- [_response-4-C-1] o.s.k.core.DefaultKafkaProducerFactory : commitTransaction failed: CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@3d0f1d94, txId=xxxxxx-dagusa-Process-Handler-et3YEAYB1R1F6h-complete_fulfillment_item_response.complete_fulfillment_item_response.4]
org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:924)
at org.apache.kafka.clients.producer.internals.TransactionManager.lambda$beginCommit$2(TransactionManager.java:296)
at org.apache.kafka.clients.producer.internals.TransactionManager.handleCachedTransactionRequestResult(TransactionManager.java:1008)
at org.apache.kafka.clients.producer.internals.TransactionManager.beginCommit(TransactionManager.java:295)
at org.apache.kafka.clients.producer.KafkaProducer.commitTransaction(KafkaProducer.java:704)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.commitTransaction(DefaultKafkaProducerFactory.java:691)
at brave.kafka.clients.TracingProducer.commitTransaction(TracingProducer.java:72)
at org.springframework.kafka.core.KafkaResourceHolder.commit(KafkaResourceHolder.java:58)
at org.springframework.kafka.transaction.KafkaTransactionManager.doCommit(KafkaTransactionManager.java:200)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)
at org.springframework.data.transaction.MultiTransactionStatus.commit(MultiTransactionStatus.java:74)
at org.springframework.data.transaction.ChainedTransactionManager.commit(ChainedTransactionManager.java:150)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:152)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListenerInTx(KafkaMessageListenerContainer.java:1569)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1546)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1288)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1035)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:949)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: org.apache.kafka.common.KafkaException: Unexpected error in AddOffsetsToTxnResponse: The producer attempted to use a producer id which is not currently assigned to its transactional id.
at org.apache.kafka.clients.producer.internals.TransactionManager$AddOffsetsToTxnHandler.handleResponse(TransactionManager.java:1406)
at org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:1069)
at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:561)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:553)
at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:425)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:311)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244)
... 1 common frames omitted
2020-07-09 08:28:13.157 WARN [xxxxxx-component-workflow-handler,,,] 9 --- [_response-4-C-1] o.s.k.core.DefaultKafkaProducerFactory : Error during some operation; producer removed from cache: CloseSafeProducer [delegate=org.apache.kafka.clients.producer.KafkaProducer@3d0f1d94, txId=xxxxxx-dagusa-Process-Handler-et3YEAYB1R1F6h-complete_fulfillment_item_response.complete_fulfillment_item_response.4]
2020-07-09 08:28:13.167 ERROR [xxxxxx-component-workflow-handler,,,] 9 --- [_response-4-C-1] essageListenerContainer$ListenerConsumer : Transaction rolled back
org.springframework.transaction.HeuristicCompletionException: Heuristic completion: outcome state is mixed; nested exception is org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
at org.springframework.data.transaction.ChainedTransactionManager.commit(ChainedTransactionManager.java:177)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:152)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListenerInTx(KafkaMessageListenerContainer.java:1569)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1546)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1288)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1035)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:949)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:924)
at org.apache.kafka.clients.producer.internals.TransactionManager.lambda$beginCommit$2(TransactionManager.java:296)
at org.apache.kafka.clients.producer.internals.TransactionManager.handleCachedTransactionRequestResult(TransactionManager.java:1008)
at org.apache.kafka.clients.producer.internals.TransactionManager.beginCommit(TransactionManager.java:295)
at org.apache.kafka.clients.producer.KafkaProducer.commitTransaction(KafkaProducer.java:704)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.commitTransaction(DefaultKafkaProducerFactory.java:691)
at brave.kafka.clients.TracingProducer.commitTransaction(TracingProducer.java:72)
at org.springframework.kafka.core.KafkaResourceHolder.commit(KafkaResourceHolder.java:58)
at org.springframework.kafka.transaction.KafkaTransactionManager.doCommit(KafkaTransactionManager.java:200)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)
at org.springframework.data.transaction.MultiTransactionStatus.commit(MultiTransactionStatus.java:74)
at org.springframework.data.transaction.ChainedTransactionManager.commit(ChainedTransactionManager.java:150)
... 9 common frames omitted
Caused by: org.apache.kafka.common.KafkaException: Unexpected error in AddOffsetsToTxnResponse: The producer attempted to use a producer id which is not currently assigned to its transactional id.
at org.apache.kafka.clients.producer.internals.TransactionManager$AddOffsetsToTxnHandler.handleResponse(TransactionManager.java:1406)
at org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:1069)
at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109)
at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:561)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:553)
at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:425)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:311)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:244)
... 1 common frames omitted
Here is my config.
Kafka sender config:
@Configuration
@EnableKafka
public class KafkaSenderConfig {

    @Value("${kafka.servers}")
    private String kafkaServers;

    @Value("${application.name}")
    private String applicationName;

    private static final Logger log = LoggerFactory.getLogger(KafkaSenderConfig.class);

    @Bean(value = "stringKafkaTransactionManager")
    public KafkaTransactionManager<String, String> kafkaStringTransactionManager() {
        KafkaTransactionManager<String, String> ktm = new KafkaTransactionManager<String, String>(stringProducerFactory());
        ktm.setNestedTransactionAllowed(true);
        ktm.setTransactionSynchronization(AbstractPlatformTransactionManager.SYNCHRONIZATION_ALWAYS);
        return ktm;
    }

    @Bean(value = "stringProducerFactory")
    @Primary
    public ProducerFactory<String, String> stringProducerFactory() {
        log.debug("Kafka Servers: " + kafkaServers);
        Map<String, Object> config = getConfigs();
        //config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory = new DefaultKafkaProducerFactory<>(config);
        String randomString = applicationName.replaceAll("\\s+", "-").concat("-").concat(StringUtil.getRandomString(14)).concat("-");
        defaultKafkaProducerFactory.setTransactionIdPrefix(randomString);
        return defaultKafkaProducerFactory;
    }

    /**
     * Create a new Kafka Template for String based Messages
     *
     * @return
     */
    @Bean(value = "stringKafkaTemplate")
    @Primary
    public KafkaTemplate<String, String> stringKafkaTemplate() {
        log.debug("Creating the Kafka Template for String Producer Factory");
        return new KafkaTemplate<>(stringProducerFactory(), true);
    }

    @Bean(name = "chainedStringKafkaTransactionManager")
    @Primary
    public ChainedKafkaTransactionManager<String, String> chainedTransactionManager(JpaTransactionManager jpaTransactionManager,
            DataSourceTransactionManager dsTransactionManager) {
        return new ChainedKafkaTransactionManager<>(kafkaStringTransactionManager(), jpaTransactionManager, dsTransactionManager);
    }

    private Map<String, Object> getConfigs() {
        Map<String, Object> config = new ConcurrentHashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServers);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        // Try to send msgs out in 100ms even if the batch size is not met
        config.put(ProducerConfig.LINGER_MS_CONFIG, 100);
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        config.put(ProducerConfig.ACKS_CONFIG, "all");
        return config;
    }
}
Kafka receiver config:
@Configuration
@EnableKafka
public class KafkaReceiverConfig {

    // Kafka Server Configuration
    @Value("${kafka.servers}")
    private String kafkaServers;

    // Group Identifier
    @Value("${kafka.groupId}")
    private String groupId;

    // Kafka Max Retry Attempts
    @Value("${kafka.retry.maxAttempts:3}")
    private Integer retryMaxAttempts;

    // Kafka Max Retry Interval
    @Value("${kafka.retry.interval:30000}")
    private Long retryInterval;

    // Kafka Concurrency
    @Value("${kafka.concurrency:10}")
    private Integer concurrency;

    // Kafka Poll Timeout
    @Value("${kafka.poll.timeout:300}")
    private Integer pollTimeout;

    // Kafka Consumer Offset
    @Value("${kafka.consumer.auto-offset-reset:earliest}")
    private String offset = "earliest";

    @Value("${kafka.max.records:100}")
    private Integer maxPollRecords;

    @Value("${kafka.max.poll.interval.time:500000}")
    private Integer maxPollIntervalMs;

    @Value("${kafka.max.session.timeout:60000}")
    private Integer sessionTimoutMs;

    // String Kafka Template to send Messages
    @Autowired
    @Qualifier("stringKafkaTemplate")
    private KafkaTemplate<String, String> stringKafkaTemplate;

    // Logger
    private static final Logger log = LoggerFactory.getLogger(KafkaReceiverConfig.class);

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory(
            ChainedKafkaTransactionManager<String, String> chainedTM, MessageProducer messageProducer) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(concurrency);
        factory.getContainerProperties().setPollTimeout(pollTimeout);
        factory.getContainerProperties().setAckMode(AckMode.RECORD);
        factory.getContainerProperties().setSyncCommits(true);
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setTransactionManager(chainedTM);
        DefaultAfterRollbackProcessor<String, String> afterRollbackProcessor = new DefaultAfterRollbackProcessor<>(
                (record, exception) -> {
                    log.warn("failed to process kafka message (retries are exhausted). topic name:" + record.topic()
                            + " value:" + record.value());
                    messageProducer.saveFailedMessage(record, exception);
                }, new FixedBackOff(retryInterval, retryMaxAttempts));
        afterRollbackProcessor.setCommitRecovered(true);
        afterRollbackProcessor.setKafkaTemplate(stringKafkaTemplate);
        factory.setAfterRollbackProcessor(afterRollbackProcessor);
        log.debug("Kafka Receiver Config kafkaListenerContainerFactory created");
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        log.debug("Kafka Receiver Config consumerFactory created");
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new ConcurrentHashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, maxPollIntervalMs);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimoutMs);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offset);
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        log.debug("Kafka Receiver Config consumerConfigs created");
        return props;
    }
}
Here is my code to send a message:
@Transactional(readOnly = false)
public void initiateOrderUpdate(String jsonString) {
    // some logic here
    stringKafkaTemplate.send("some_topic", jsonString);
    // some logic here
}
Previously I was using Spring Boot 2.1.9 and Spring Kafka 2.2.9 and everything worked fine, but after upgrading to the versions above I am facing this issue.
Is there any issue with my configuration?

Related

RecoveryCallback is running 2 times during message retry in Spring Kafka 2.2.0

I'm retrying a message n times in the main topic, and after the max attempts are exhausted I publish it to a DLT topic. When retries are exhausted, the RecoveryCallback is called, where I throw a custom exception so that the error handler (a SeekToCurrentErrorHandler) handles the error and publishes the message to the DLT topic. But after n retries it goes into the RecoveryCallback and throws the exception, and instead of calling the error handler it retries n more times, calls the RecoveryCallback again, and only then handles the error in the error handler. I have tried many examples but nothing worked.
The flow looks like this:
n retries -> RecoveryCallback -> again n retries -> RecoveryCallback -> error handler.
@Configuration
public class KafkaQueueConfiguration {

    private static Logger log = LoggerFactory.getLogger(KafkaQueueConfiguration.class);

    @Autowired
    private StringRedisTemplate redisTemplate;

    @Value("${bootstrap.ip}")
    private String consumerBootstrap;
    @Value("${group.id}")
    private String consumerGroupId;
    @Value("${consumer.offset.type}")
    private String autoOffsetType;
    @Value("${kafka.backoff.interval}")
    private Long fixedInterval;
    @Value("${kafka.maximum.poll.records}")
    private Integer maxPollRecords;
    @Value("${bank-notification-sasl-jaas-config}")
    private String saslJaasConfig;
    @Value("${bank-notification-json-type-mapping}")
    private String jsonTypeMapping;
    @Value("${bank-notification-dejson-delegate}")
    private String deJsonDelegate;
    @Value("${kafka.topics.bank.notification.dlt}")
    private String dlqName;
    @Value("${kafka-consumer-saslMechanism}")
    private String saslMechanism;
    @Value("${kafka-consumer-securityProtocol}")
    private String securityProtocol;
    @Value("${heartbeat-interval-ms}")
    private String heartbeatInterval;
    @Value("${session-timeout-ms}")
    private String sessionTimeout;
    @Value("${max-poll-interval-ms}")
    private Integer maxPollInterval;
    @Value("${max-retry-attempt-ms}")
    private Integer maxRetryAttempts;
    @Value("${retry-interval-ms}")
    private long retryInterval;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, consumerBootstrap);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetType);
        props.put(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG, fixedInterval);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, maxPollInterval);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
        props.put(SaslConfigs.SASL_JAAS_CONFIG, saslJaasConfig);
        props.put(JsonDeserializer.TYPE_MAPPINGS, jsonTypeMapping);
        props.put(SaslConfigs.SASL_MECHANISM, saslMechanism);
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, securityProtocol);
        props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, deJsonDelegate);
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, heartbeatInterval);
        props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, SslConfigs.DEFAULT_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM);
        return props;
    }

    @Bean
    public ConsumerFactory<Object, Object> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory, KafkaTemplate<Object, Object> template) {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        factory.getContainerProperties().setAckOnError(false);
        factory.setRetryTemplate(retryTemplate());
        factory.setRecoveryCallback(context -> {
            log.info("Recovery callback");
            Acknowledgment ack = (Acknowledgment) context.getAttribute(RetryingMessageListenerAdapter.CONTEXT_ACKNOWLEDGMENT);
            log.info("Manually committed..{}", countRecover);
            log.info("RetryCount :: {} and last retry exception :: {}", context.getRetryCount(), context.getLastThrowable().getCause().getMessage());
            log.info("Retry records :: {}", context.getAttribute(RetryingMessageListenerAdapter.CONTEXT_RECORD));
            ConsumerRecord consumerRecord = (ConsumerRecord) context.getAttribute(RetryingMessageListenerAdapter.CONTEXT_RECORD);
            EftTransactionDetail eftTransactionDetail = (EftTransactionDetail) consumerRecord.value();
            log.info("Retrying records ::{}", eftTransactionDetail);
            redisTemplate.opsForValue().setIfAbsent(BankNotificationsConstants.SMS_REDIS_PREFIX + eftTransactionDetail.getAtdEntryId(), eftTransactionDetail.getAtdEntryId());
            ack.acknowledge();
            throw new ValidationException(SystemTag.HG, context.getLastThrowable().getCause().getMessage(), context.getLastThrowable().getCause().getMessage());
        });
        factory.setErrorHandler(errorHandler(publisher(template)));
        configurer.configure(factory, kafkaConsumerFactory);
        return factory;
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
        return new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer, 0); // set max failure attempts to 0
    }

    @Bean
    public DeadLetterPublishingRecoverer publisher(KafkaTemplate<Object, Object> template) {
        return new DeadLetterPublishingRecoverer(template);
    }

    public RetryTemplate retryTemplate() {
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(maxRetryAttempts));
        FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
        backOffPolicy.setBackOffPeriod(retryInterval);
        retryTemplate.setBackOffPolicy(backOffPolicy);
        return retryTemplate;
    }
}

KafkaConsumer With Multiple Different Avro Producers And Transactions

I have a single Kafka consumer. It consumes a string. Based on the string we then convert it to different Avro objects and publish them to different topics. We require EOS, and the issue we are getting is that the producer marked with @Primary works, but the one without @Primary fails with the error below. Is there any way to accommodate both?
KafkaConsumer
@Configuration
public class KafkaConsumerConfig {

    @Value("${kafka.server}")
    String server;

    @Value("${kafka.consumer.groupid}")
    String groupid;

    @Autowired
    Tracer tracer;

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, server);
        config.put(ConsumerConfig.GROUP_ID_CONFIG, groupid);
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        config.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 120000);
        config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10000);
        //config.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 15000);
        return new TracingConsumerFactory<>(new DefaultKafkaConsumerFactory<>(config), tracer);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            KafkaAwareTransactionManager<Object, Object> transactionManager) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory());
        factory.setAutoStartup(false);
        factory.setConcurrency(2);
        factory.setBatchListener(true);
        factory.getContainerProperties().setAckMode(AckMode.BATCH);
        factory.getContainerProperties().setEosMode(EOSMode.ALPHA);
        factory.getContainerProperties().setTransactionManager(transactionManager);
        return factory;
    }
}
KafkaProducer 1
@Configuration
public class KafkaProducerConfig {

    @Value("${kafka.server}")
    String server;

    @Autowired
    public Tracer tracer;

    String tranId = "eventsanavro";

    @Bean(name = "transactionalProducerFactoryAvro")
    public ProducerFactory<String, TransactionAvroEntity> producerFactoryavro() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, server);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, AvroSerializer.class.getName());
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        config.put(ProducerConfig.ACKS_CONFIG, "all");
        config.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, tranId);
        config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        config.put(ProducerConfig.LINGER_MS_CONFIG, "200");
        config.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(256 * 1024));
        config.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 120000);
        config.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);
        config.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, Integer.toString(32768 * 1024));
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        config.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, tranId);
        return new TracingProducerFactory<>(new DefaultKafkaProducerFactory<>(config), tracer);
    }

    @Qualifier("transactionalProducerFactoryAvro")
    @Bean(name = "transactionalKafkaTemplateAvro")
    public KafkaTemplate<String, TransactionAvroEntity> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactoryavro());
    }

    @Qualifier("transactionalProducerFactoryAvro")
    @Bean(name = "transactionalKafkaTransactionManagerAvro")
    public KafkaAwareTransactionManager<?, ?> kafkaTransactionManager(
            ProducerFactory<String, TransactionAvroEntity> producerFactory) {
        return new KafkaTransactionManager<>(producerFactory);
    }
}
KafkaProducer 2
@Configuration
public class KafkaProducerNonAvroConfig {

    @Value("${kafka.server}")
    String server;

    @Autowired
    public Tracer tracer;

    String tranId = "eventsannonavro";

    @Primary
    @Bean(name = "transactionalProducerFactoryNonAvro")
    public ProducerFactory<String, String> producerFactoryNonAvro() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, server);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        config.put(ProducerConfig.ACKS_CONFIG, "all");
        config.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, tranId);
        config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        config.put(ProducerConfig.LINGER_MS_CONFIG, "200");
        config.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(256 * 1024));
        config.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 120000);
        config.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);
        config.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, Integer.toString(32768 * 1024));
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        config.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, tranId);
        return new TracingProducerFactory<>(new DefaultKafkaProducerFactory<>(config), tracer);
    }

    @Primary
    @Qualifier("transactionalProducerFactoryNonAvro")
    @Bean(name = "transactionalKafkaTemplateNonAvro")
    public KafkaTemplate<String, String> kafkatemplate() {
        return new KafkaTemplate<>(producerFactoryNonAvro());
    }

    @Primary
    @Qualifier("transactionalProducerFactoryNonAvro")
    @Bean(name = "transactionalKafkaTransactionManagerNonAvro")
    public KafkaAwareTransactionManager<?, ?> kafkaTransactionManager(ProducerFactory<String, String> producerFactory) {
        return new KafkaTransactionManager<>(producerFactory);
    }
}
ProducerWrapper
@Service
public class KafkaTopicProducer {

    @Autowired
    private KafkaTemplate<String, TransactionAvroEntity> kafkaTemplate;

    @Autowired
    private KafkaTemplate<String, String> kafkaProducerNonAvrokafkaTemplate;

    public void topicProducerAvro(TransactionAvroEntity payload, String topic, Headers headers) {
        ProducerRecord<String, TransactionAvroEntity> producerRecord = new ProducerRecord<String, TransactionAvroEntity>(
                topic, null, UUID.randomUUID().toString(), payload, headers);
        kafkaTemplate.send(producerRecord);
    }

    public void kafkaAvroFlush() {
        kafkaTemplate.flush();
    }

    public void topicProducerNonAvro(String payload, String topic, Headers headers) {
        ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(topic, null,
                UUID.randomUUID().toString(), payload, headers);
        kafkaProducerNonAvrokafkaTemplate.send(producerRecord);
    }

    public void kafkaNonAvroFlush() {
        kafkaProducerNonAvrokafkaTemplate.flush();
    }
}
ERROR
Caused by: java.lang.IllegalStateException: No transaction is in process; possible solutions: run the template operation within the scope of a template.executeInTransaction() operation, start a transaction with @Transactional before invoking the template method, run in a transaction started by a listener container when consuming a record
Full Stack Trace
2022-05-03 09:35:11,358 INFO [nerMoz-0-C-1] o.a.kafka.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-ifhEventSanitizer-1, groupId=ifhEventSanitizer] Seeking to offset 0 for partition za.local.file.singleLineGLTransactionEvent.1-0
2022-05-03 09:35:11,883 INFO [nerMoz-0-C-1] o.a.kafka.clients.producer.KafkaProducer : [Producer clientId=producer-eventsanavroifhEventSanitizer.za.local.file.singleLineGLTransactionEvent.1.0, transactionalId=eventsanavroifhEventSanitizer.za.local.file.singleLineGLTransactionEvent.1.0] Aborting incomplete transaction
2022-05-03 09:35:11,884 ERROR [nerMoz-0-C-1] essageListenerContainer$ListenerConsumer : Transaction rolled back
org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public boolean com.fnb.fin.ifhEventSanitizer.kafka.KafkaConsumerMoz.consume(java.util.List<org.apache.kafka.clients.consumer.ConsumerRecord<java.lang.String, java.lang.String>>,org.apache.kafka.clients.consumer.Consumer<?, ?>)' threw exception; nested exception is java.lang.IllegalStateException: No transaction is in process; possible solutions: run the template operation within the scope of a template.executeInTransaction() operation, start a transaction with @Transactional before invoking the template method, run in a transaction started by a listener container when consuming a record; nested exception is java.lang.IllegalStateException: No transaction is in process; possible solutions: run the template operation within the scope of a template.executeInTransaction() operation, start a transaction with @Transactional before invoking the template method, run in a transaction started by a listener container when consuming a record
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.decorateException(KafkaMessageListenerContainer.java:2372)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeBatchOnMessage(KafkaMessageListenerContainer.java:2008)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeBatchOnMessageWithRecordsOrList(KafkaMessageListenerContainer.java:1978)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeBatchOnMessage(KafkaMessageListenerContainer.java:1930)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeBatchListener(KafkaMessageListenerContainer.java:1842)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.access$2100(KafkaMessageListenerContainer.java:518)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$1.doInTransactionWithoutResult(KafkaMessageListenerContainer.java:1749)
at org.springframework.transaction.support.TransactionCallbackWithoutResult.doInTransaction(TransactionCallbackWithoutResult.java:36)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeBatchListenerInTx(KafkaMessageListenerContainer.java:1740)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeBatchListener(KafkaMessageListenerContainer.java:1722)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1704)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1274)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1266)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1161)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:832)
Suppressed: org.springframework.kafka.listener.ListenerExecutionFailedException: Restored Stack Trace
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:363)
at org.springframework.kafka.listener.adapter.BatchMessagingMessageListenerAdapter.invoke(BatchMessagingMessageListenerAdapter.java:180)
at org.springframework.kafka.listener.adapter.BatchMessagingMessageListenerAdapter.onMessage(BatchMessagingMessageListenerAdapter.java:172)
at org.springframework.kafka.listener.adapter.BatchMessagingMessageListenerAdapter.onMessage(BatchMessagingMessageListenerAdapter.java:61)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeBatchOnMessage(KafkaMessageListenerContainer.java:1988)
Caused by: java.lang.IllegalStateException: No transaction is in process; possible solutions: run the template operation within the scope of a template.executeInTransaction() operation, start a transaction with @Transactional before invoking the template method, run in a transaction started by a listener container when consuming a record
at org.springframework.util.Assert.state(Assert.java:76)
at org.springframework.kafka.core.KafkaTemplate.getTheProducer(KafkaTemplate.java:657)
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:569)
at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:406)
at com.fnb.fin.ifhEventSanitizer.kafka.KafkaTopicProducer.topicProducerNonAvro(KafkaTopicProducer.java:44)
at com.fnb.fin.ifhEventSanitizer.kafka.KafkaConsumerMoz.consume(KafkaConsumerMoz.java:108)
at jdk.internal.reflect.GeneratedMethodAccessor111.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:171)
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:120)
at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:56)
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:347)
at org.springframework.kafka.listener.adapter.BatchMessagingMessageListenerAdapter.invoke(BatchMessagingMessageListenerAdapter.java:180)
at org.springframework.kafka.listener.adapter.BatchMessagingMessageListenerAdapter.onMessage(BatchMessagingMessageListenerAdapter.java:172)
at org.springframework.kafka.listener.adapter.BatchMessagingMessageListenerAdapter.onMessage(BatchMessagingMessageListenerAdapter.java:61)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeBatchOnMessage(KafkaMessageListenerContainer.java:1988)
... 16 common frames omitted
The KafkaTransactionManager can only start a transaction in a producer from one factory; even if it could start two, you would lose EOS guarantees since they would be different transactions, so if you perform sends to both, they won't be in the same transaction.
To solve this problem, you should use one producer factory with a DelegatingByTypeSerializer or DelegatingByTopicSerializer.
e.g.
public ProducerFactory<String, Object> producerFactory() {
    ...
    Map<Class<?>, Serializer> delegates = new LinkedHashMap<>(); // retains the order when iterating
    delegates.put(String.class, new StringSerializer());
    delegates.put(Object.class, new JsonSerializer<>());
    DelegatingByTypeSerializer dbts = new DelegatingByTypeSerializer(delegates, true);
    return new TracingProducerFactory<>(
            new DefaultKafkaProducerFactory<>(config, new StringSerializer(), dbts), tracer);
}

Cannot convert from [java.lang.String] to [com.example.demo.User]

I'm working on Spring Boot and Apache Kafka, trying to have user-defined configurations, and I am getting this error:
org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method could not be invoked with the incoming message
Endpoint handler details:
Method [public void com.example.demo.Consumer.consume(com.example.demo.User) throws java.io.IOException]
Bean [com.example.demo.Consumer@7cd4a8cc]; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot handle message; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot convert from [java.lang.String] to [com.example.demo.User] for GenericMessage [payload={"name":"Prateek","age":33}, headers={kafka_offset=7, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@32375775, kafka_timestampType=CREATE_TIME, kafka_receivedPartitionId=0, kafka_receivedTopic=users, kafka_receivedTimestamp=1613708323636, __TypeId__=[B@165438c, kafka_groupId=group_id}], failedMessage=GenericMessage [payload={"name":"Prateek","age":33}, headers={kafka_offset=7, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@32375775, kafka_timestampType=CREATE_TIME, kafka_receivedPartitionId=0, kafka_receivedTopic=users, kafka_receivedTimestamp=1613708323636, __TypeId__=[B@165438c, kafka_groupId=group_id}]; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot handle message; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot convert from [java.lang.String] to [com.example.demo.User] for GenericMessage [payload={"name":"Prateek","age":33}, headers={kafka_offset=7, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@32375775, kafka_timestampType=CREATE_TIME, kafka_receivedPartitionId=0, kafka_receivedTopic=users, kafka_receivedTimestamp=1613708323636, __TypeId__=[B@165438c, kafka_groupId=group_id}], failedMessage=GenericMessage [payload={"name":"Prateek","age":33}, headers={kafka_offset=7, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@32375775, kafka_timestampType=CREATE_TIME, kafka_receivedPartitionId=0, kafka_receivedTopic=users, kafka_receivedTimestamp=1613708323636, __TypeId__=[B@165438c, kafka_groupId=group_id}]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.decorateException(KafkaMessageListenerContainer.java:2110) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeErrorHandler(KafkaMessageListenerContainer.java:2098) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1997) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1924) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1812) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1531) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1178) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1075) ~[spring-kafka-2.6.5.jar:2.6.5]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_171]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_171]
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot handle message; nested exception is org.springframework.messaging.converter.MessageConversionException: Cannot convert from [java.lang.String] to [com.example.demo.User] for GenericMessage [payload={"name":"Prateek","age":33}, headers={kafka_offset=7, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@32375775, kafka_timestampType=CREATE_TIME, kafka_receivedPartitionId=0, kafka_receivedTopic=users, kafka_receivedTimestamp=1613708323636, __TypeId__=[B@165438c, kafka_groupId=group_id}], failedMessage=GenericMessage [payload={"name":"Prateek","age":33}, headers={kafka_offset=7, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@32375775, kafka_timestampType=CREATE_TIME, kafka_receivedPartitionId=0, kafka_receivedTopic=users, kafka_receivedTimestamp=1613708323636, __TypeId__=[B@165438c, kafka_groupId=group_id}]
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:340) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:86) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:51) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2065) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2047) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1984) ~[spring-kafka-2.6.5.jar:2.6.5]
... 8 common frames omitted
Caused by: org.springframework.messaging.converter.MessageConversionException: Cannot convert from [java.lang.String] to [com.example.demo.User] for GenericMessage [payload={"name":"Prateek","age":33}, headers={kafka_offset=7, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer@32375775, kafka_timestampType=CREATE_TIME, kafka_receivedPartitionId=0, kafka_receivedTopic=users, kafka_receivedTimestamp=1613708323636, __TypeId__=[B@165438c, kafka_groupId=group_id}]
at org.springframework.messaging.handler.annotation.support.PayloadMethodArgumentResolver.resolveArgument(PayloadMethodArgumentResolver.java:145) ~[spring-messaging-5.3.3.jar:5.3.3]
at org.springframework.kafka.annotation.KafkaListenerAnnotationBeanPostProcessor$KafkaNullAwarePayloadArgumentResolver.resolveArgument(KafkaListenerAnnotationBeanPostProcessor.java:926) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.messaging.handler.invocation.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:117) ~[spring-messaging-5.3.3.jar:5.3.3]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:148) ~[spring-messaging-5.3.3.jar:5.3.3]
at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:116) ~[spring-messaging-5.3.3.jar:5.3.3]
at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:48) ~[spring-kafka-2.6.5.jar:2.6.5]
at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:329) ~[spring-kafka-2.6.5.jar:2.6.5]
... 13 common frames omitted
How can we solve this?
Below is my code:
User.java
@Builder
@Data
@NoArgsConstructor
@AllArgsConstructor
public class User {
    private String name;
    private int age;
}
KafkaProducerConfig.java
@Configuration
public class KafkaProducerConfig {

    @Value(value = "${kafka.bootstrapAddress}")
    private String bootstrapAddress;

    // 1. Send string to Kafka
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    // 2. Send User objects to Kafka
    @Bean
    public ProducerFactory<String, User> userProducerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, User> userKafkaTemplate() {
        return new KafkaTemplate<>(userProducerFactory());
    }
}
Producer.java
@Service
public class Producer {

    private static final Logger logger = LoggerFactory.getLogger(Producer.class);
    private static final String TOPIC = "users";

    @Autowired
    private KafkaTemplate<String, User> kafkaTemplate;

    public void sendMessage(User user) {
        logger.info(String.format("#### -> Producing message -> %s", user.toString()));
        this.kafkaTemplate.send(TOPIC, user);
    }
}
KafkaConsumerConfig.java
@Configuration
public class KafkaConsumerConfig {

    @Value(value = "${kafka.bootstrapAddress}")
    private String bootstrapAddress;

    @Value(value = "${general.topic.group.id}")
    private String groupId;

    @Value(value = "${user.topic.group.id}")
    private String userGroupId;

    // 1. Consume string data from Kafka
    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
        return new DefaultKafkaConsumerFactory<>(props);
    }

    // 2. Consume user objects from Kafka
    public ConsumerFactory<String, User> userConsumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, userGroupId);
        props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(User.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, User> userKafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, User> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(userConsumerFactory());
        factory.setMessageConverter(new StringJsonMessageConverter());
        return factory;
    }
}
Consumer.java
@Service
public class Consumer {

    private final Logger logger = LoggerFactory.getLogger(Producer.class);

    @KafkaListener(topics = "users", groupId = "group_id")
    public void consume(User user) throws IOException {
        logger.info(String.format("#### -> Consumed message -> %s", user.toString()));
    }
}
KafkaController.java
@RestController
@RequestMapping(value = "/kafka")
public class KafkaController {

    private final Producer producer;

    @Autowired
    KafkaController(Producer producer) {
        this.producer = producer;
    }

    @PostMapping(value = "/publish")
    public void sendMessageToKafkaTopic(@RequestBody User user) {
        this.producer.sendMessage(user);
    }
}
KafkaExampleApplication.java
@SpringBootApplication
public class KafkaExampleApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaExampleApplication.class, args);
    }
}
application.properties
server.port=9000
kafka.bootstrapAddress=localhost:9092
general.topic.group.id=group_id
user.topic.group.id=MyGrpId
Try to add this to your @KafkaListener annotation, as in the second clause of @Lotzy's answer:
containerFactory = "userKafkaListenerContainerFactory"
In your Producer class, you need to rename the kafkaTemplate to userKafkaTemplate, which is wired to the User producer factory with the JSON serializer you want to use.
In fact, you should probably remove the other template and producer factory if you're not using them.
In KafkaConsumerConfig.java, the userConsumerFactory() bean uses userGroupId = "MyGrpId", wired via @Value from the property user.topic.group.id=MyGrpId. But then in Consumer.java, the @KafkaListener annotation specifies groupId = "group_id".
In the same annotation, it would be better to specify the containerFactory instead of the groupId, like this: @KafkaListener(topics = "users", containerFactory = "userKafkaListenerContainerFactory"). This way the annotation wires exactly the wanted ConsumerFactory bean configuration via the ConcurrentKafkaListenerContainerFactory, as sketched below.
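For illustration, the question's listener with that suggestion applied (a sketch; only the annotation changes):

@Service
public class Consumer {

    private final Logger logger = LoggerFactory.getLogger(Consumer.class);

    // containerFactory selects the User-aware factory; the group id then
    // comes from the userConsumerFactory() configuration (user.topic.group.id)
    @KafkaListener(topics = "users", containerFactory = "userKafkaListenerContainerFactory")
    public void consume(User user) throws IOException {
        logger.info(String.format("#### -> Consumed message -> %s", user));
    }
}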
Also, I agree with @OneCricketeer that in the Producer you should qualify the autowired kafkaTemplate with the one that produces User objects if you later want to consume JSON deserializable to User objects, like this:
@Autowired
@Qualifier("userKafkaTemplate")
private KafkaTemplate<String, User> kafkaTemplate;
More details here: Baeldung - Apache Kafka with Spring
You should add this to your properties to let the deserializer know what type to cast into:
properties.put(JsonDeserializer.VALUE_DEFAULT_TYPE, com.example.demo.User.class);
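One way this could look in the question's userConsumerFactory() (a sketch; note the property only takes effect when Kafka instantiates and configures the deserializer itself, i.e. when it is given as a class rather than as a pre-built instance):

public ConsumerFactory<String, User> userConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, userGroupId);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "com.example.demo");
    // Fallback type when no usable type header is present on the record
    props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, com.example.demo.User.class);
    return new DefaultKafkaConsumerFactory<>(props);
}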

How to load Kafka Consumer lazily in Spring Boot?

I want to provide the group id through a command-line argument, but when I tried this I got the following error:
Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is java.lang.IllegalStateException: No group.id found in consumer config, container properties, or @KafkaListener annotation; a group.id is required when group management is used.
That means the group id is required while the Kafka listener is loading. If I set the groupId in the consumer config file, it works properly.
So is there any way I can give the group id through the command line and have the Kafka listener load lazily, so that it is not required at program startup?
My ConsumerConfig:
@Configuration
class KafkaConsumerConfig {

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Autowired
    private ArgumentModel argumentModel;

    private Logger logger = LoggerFactory.getLogger(KafkaConsumerConfig.class);

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        logger.info("bootstrapServers : {}", bootstrapServers);
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, argumentModel.getKafkaGroupId());
        return props;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
@KafkaListener(... groupId = "${group.id}")
Then pass -Dgroup.id=myGroup on the command line.
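A minimal sketch of that (topic name and method body are illustrative):

// The ${group.id} placeholder is resolved at startup from a system property
// or command-line argument, so no group id is needed in the consumer config.
@KafkaListener(topics = "myTopic", groupId = "${group.id}")
public void listen(String message) {
    System.out.println("received: " + message);
}

Started with java -Dgroup.id=myGroup -jar app.jar, the placeholder resolves before the listener container starts.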

How to use Retry Template (backOffPolicy) with AfterRollbackProcessor in Spring Kafka 2.2.9.RELEASE

I am using Spring Boot 2.1.9 with Kafka and MySQL, and I have also implemented a chained transaction manager.
I want to set the backOffPolicy so that the retry happens after a certain interval. This is possible in the newer Spring Kafka version, but due to some other dependencies I am not able to upgrade Spring Boot.
As of now I am using an AfterRollbackProcessor to handle failed messages; now I want to implement a backOffPolicy with the AfterRollbackProcessor using Spring Kafka 2.2.9.RELEASE. Is there any way to implement it?
Here is the receiver config file:
@Configuration
@EnableKafka
public class KafkaReceiverConfig {

    // Kafka Server Configuration
    @Value("${kafka.servers}")
    private String kafkaServers;

    // Group Identifier
    @Value("${kafka.groupId}")
    private String groupId;

    // Kafka Max Retry Attempts
    @Value("${kafka.retry.maxAttempts:5}")
    private Integer retryMaxAttempts;

    // Kafka Max Retry Interval
    @Value("${kafka.retry.interval:180000}")
    private Long retryInterval;

    // Kafka Concurrency
    @Value("${kafka.concurrency:10}")
    private Integer concurrency;

    // Kafka Poll Timeout
    @Value("${kafka.poll.timeout:300}")
    private Integer pollTimeout;

    // Kafka Consumer Offset
    @Value("${kafka.consumer.auto-offset-reset:earliest}")
    private String offset = "earliest";

    @Value("${kafka.max.records:100}")
    private Integer maxPollRecords;

    @Value("${kafka.max.poll.interval.time:500000}")
    private Integer maxPollIntervalMs;

    @Value("${kafka.max.session.timeout:60000}")
    private Integer sessionTimoutMs;

    // Logger
    private static final Logger log = LoggerFactory.getLogger(KafkaReceiverConfig.class);

    /**
     * String Kafka Listener Container Factory
     *
     * @return @see {@link KafkaListenerContainerFactory}
     */
    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory(
            ChainedKafkaTransactionManager<String, String> chainedTM, MessageProducer messageProducer) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(concurrency);
        factory.getContainerProperties().setPollTimeout(pollTimeout);
        factory.getContainerProperties().setAckMode(AckMode.RECORD);
        factory.getContainerProperties().setSyncCommits(true);
        // factory.setRetryTemplate(retryTemplate());
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setTransactionManager(chainedTM);
        // factory.setStatefulRetry(true);
        AfterRollbackProcessor<String, String> afterRollbackProcessor = new DefaultAfterRollbackProcessor<>(
                (record, exception) -> {
                    log.warn("failed to process kafka message (retries are exhausted). topic name:" + record.topic()
                            + " value:" + record.value());
                    messageProducer.saveFailedMessage(record, exception);
                }, retryMaxAttempts);
        factory.setAfterRollbackProcessor(afterRollbackProcessor);
        log.debug("Kafka Receiver Config kafkaListenerContainerFactory created");
        return factory;
    }

    /**
     * String Consumer Factory
     *
     * @return @see {@link ConsumerFactory}
     */
    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        log.debug("Kafka Receiver Config consumerFactory created");
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    /**
     * Consumer Configurations
     *
     * @return @see {@link Map}
     */
    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new ConcurrentHashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, maxPollIntervalMs);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimoutMs);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offset);
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        log.debug("Kafka Receiver Config consumerConfigs created");
        return props;
    }
}
You can use listener retry, but it MUST be stateful (you have that commented out). Otherwise, the retries will be performed within the transaction, which is generally not what you want.
With stateful retry, the template throws the exception after it backs off; then the after rollback processor will perform a re-seek so the record is reprocessed.
As you say, in 2.3 we added a BackOff to the after rollback processor to make it easier to configure everything all in one place.
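A condensed sketch of how that could look against the question's own factory and properties, assuming the Spring Kafka 2.2.x APIs already shown in the question (setRetryTemplate and the commented-out setStatefulRetry); an illustration, not a drop-in fix:

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory(
        ChainedKafkaTransactionManager<String, String> chainedTM, MessageProducer messageProducer) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setTransactionManager(chainedTM);
    // Stateful retry: the template backs off between attempts, then rethrows;
    // the rollback triggers a re-seek and the record is redelivered.
    factory.setRetryTemplate(retryTemplate());
    factory.setStatefulRetry(true);
    // Keep the processor's attempts aligned with the retry policy so the
    // recoverer fires only once retries are exhausted.
    factory.setAfterRollbackProcessor(new DefaultAfterRollbackProcessor<>(
            (record, exception) -> messageProducer.saveFailedMessage(record, exception),
            retryMaxAttempts));
    return factory;
}

private RetryTemplate retryTemplate() {
    RetryTemplate template = new RetryTemplate();
    template.setRetryPolicy(new SimpleRetryPolicy(retryMaxAttempts));
    FixedBackOffPolicy backOff = new FixedBackOffPolicy();
    backOff.setBackOffPeriod(retryInterval); // kafka.retry.interval ms
    template.setBackOffPolicy(backOff);
    return template;
}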
