Kafka ConsumerInterceptor onCommit not being called when using transactions - spring-boot

I'm using Spring Kafka in a Spring Boot application. I'm attempting to use a Kafka ConsumerInterceptor to intercept when offsets are committed.
This works when producer transactions are not enabled, but once transactions are turned on, Interceptor::onCommit is no longer called.
In the following minimal example, everything works as expected:
@SpringBootApplication
@EnableKafka
class Application {
    @KafkaListener(topics = ["test"])
    fun onMessage(message: String) {
        log.warn("onMessage: $message")
    }
}
Interceptor:
class Interceptor : ConsumerInterceptor<String, String> {
    override fun onCommit(offsets: MutableMap<TopicPartition, OffsetAndMetadata>) {
        log.warn("onCommit: $offsets")
    }

    override fun onConsume(records: ConsumerRecords<String, String>): ConsumerRecords<String, String> {
        log.warn("onConsume: $records")
        return records
    }

    // required by the interface; no-ops here
    override fun close() {}

    override fun configure(configs: MutableMap<String, *>) {}
}
Application config:
spring:
  kafka:
    consumer:
      enable-auto-commit: false
      auto-offset-reset: earliest
      properties:
        "interceptor.classes": com.example.Interceptor
      group-id: test-group
    listener:
      ack-mode: record
Inside a test using @EmbeddedKafka:
@Test
fun sendMessage() {
    kafkaTemplate.send("test", "id", "sent message").get() // block so we don't end before the consumer gets the message
}
This outputs what I would expect:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords@6a646f3c
onMessage: sent message
onCommit: {test-0=OffsetAndMetadata{offset=1, leaderEpoch=null, metadata=''}}
However, when I enable transactions by providing a transaction-id-prefix, the Interceptor's onCommit is no longer called.
My updated config only adds:
spring:
  kafka:
    producer:
      transaction-id-prefix: tx-id-
And the test is updated to wrap send in a transaction:
@Test
fun sendMessage() {
    kafkaTemplate.executeInTransaction {
        kafkaTemplate.send("test", "a", "sent message").get()
    }
}
With this change, my log output is now only:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords@738b5968
onMessage: sent message
The Interceptor's onConsume method is called and the @KafkaListener receives the message, but onCommit is never called.
Does anyone happen to know what's happening here? Are my expectations about what I should see incorrect?

Offsets are not committed via the consumer when using transactions (exactly-once semantics). Instead, the offsets are committed via the producer.
From the KafkaProducer javadocs:
/**
 * Sends a list of specified offsets to the consumer group coordinator, and also marks
 * those offsets as part of the current transaction. These offsets will be considered
 * committed only if the transaction is committed successfully. The committed offset should
 * be the next message your application will consume, i.e. lastProcessedMessageOffset + 1.
 * <p>
 * This method should be used when you need to batch consumed and produced messages
 * together, typically in a consume-transform-produce pattern. Thus, the specified
 * {@code groupMetadata} should be extracted from the used {@link KafkaConsumer consumer} via
 * {@link KafkaConsumer#groupMetadata()} to leverage consumer group metadata. This will provide
 * stronger fencing than just supplying the {@code consumerGroupId} and passing in {@code new ConsumerGroupMetadata(consumerGroupId)},
 * however note that the full set of consumer group metadata returned by {@link KafkaConsumer#groupMetadata()}
 * requires the brokers to be on version 2.5 or newer to understand.
 *
 * <p>
 * Note, that the consumer should have {@code enable.auto.commit=false} and should
 * also not commit offsets manually (via {@link KafkaConsumer#commitSync(Map) sync} or
 * {@link KafkaConsumer#commitAsync(Map, OffsetCommitCallback) async} commits).
 * This method will raise {@link TimeoutException} if the producer cannot send offsets before expiration of {@code max.block.ms}.
 * Additionally, it will raise {@link InterruptException} if interrupted.
 *
 * @throws IllegalStateException if no transactional.id has been configured or no transaction has been started.
 * @throws ProducerFencedException fatal error indicating another producer with the same transactional.id is active
 * @throws org.apache.kafka.common.errors.UnsupportedVersionException fatal error indicating the broker
 *         does not support transactions (i.e. if its version is lower than 0.11.0.0) or
 *         the broker doesn't support latest version of transactional API with all consumer group metadata
 *         (i.e. if its version is lower than 2.5.0).
 * @throws org.apache.kafka.common.errors.UnsupportedForMessageFormatException fatal error indicating the message
 *         format used for the offsets topic on the broker does not support transactions
 * @throws org.apache.kafka.common.errors.AuthorizationException fatal error indicating that the configured
 *         transactional.id is not authorized, or the consumer group id is not authorized.
 * @throws org.apache.kafka.clients.consumer.CommitFailedException if the commit failed and cannot be retried
 *         (e.g. if the consumer has been kicked out of the group). Users should handle this by aborting the transaction.
 * @throws org.apache.kafka.common.errors.FencedInstanceIdException if this producer instance gets fenced by broker due to a
 *         mis-configured consumer instance id within group metadata.
 * @throws org.apache.kafka.common.errors.InvalidProducerEpochException if the producer has attempted to produce with an old epoch
 *         to the partition leader. See the exception for more details
 * @throws KafkaException if the producer has encountered a previous fatal or abortable error, or for any
 *         other unexpected error
 * @throws TimeoutException if the time taken for sending offsets has surpassed max.block.ms.
 * @throws InterruptException if the thread is interrupted while blocked
 */
public void sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata> offsets,
                                     ConsumerGroupMetadata groupMetadata) throws ProducerFencedException {
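To make that concrete, here is a minimal consume-transform-produce sketch using the plain Kafka client API. The "in"/"out" topic names are assumptions, and the producer/consumer are assumed to be pre-configured clients (producer with transactional.id set, consumer with enable.auto.commit=false); when a transaction-id-prefix is set, Spring Kafka's listener container does the equivalent internally:

static void pipe(KafkaConsumer<String, String> consumer, KafkaProducer<String, String> producer) {
    consumer.subscribe(List.of("in")); // "in" is an assumed topic name
    producer.initTransactions();
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        if (records.isEmpty()) {
            continue;
        }
        producer.beginTransaction();
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> record : records) {
            producer.send(new ProducerRecord<>("out", record.key(), record.value()));
            // the committed offset is the NEXT offset to consume, hence + 1
            offsets.put(new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset() + 1));
        }
        // offsets travel with the transaction through the producer, which is
        // why the consumer-side ConsumerInterceptor.onCommit never fires
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
        producer.commitTransaction();
    }
}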

Related

Filter messages before executing #RabbitListener

How can I filter a message before it is processed by a @RabbitListener-annotated method?
If the message is, for example, "duplicated" because it contains a header with a particular value, I would like to return an "ack" and skip processing (skip the body of the @RabbitListener method).
I tried to do it in a MessagePostProcessor (with addAfterReceivePostProcessors) but could not skip execution based on, for example, a message property (header).
This is the signature of MessagePostProcessor:
Message postProcessMessage(Message message) throws AmqpException;
I would like to return an "ack" here so the message processing is skipped.
Thank you for your support.
I think an AmqpRejectAndDontRequeueException is what you need to throw from your MessagePostProcessor impl.
See its javadocs:
/**
 * Exception for listener implementations used to indicate the
 * basic.reject will be sent with requeue=false in order to enable
 * features such as DLQ.
 * @author Gary Russell
 * @since 1.0.1
 *
 */
@SuppressWarnings("serial")
public class AmqpRejectAndDontRequeueException extends AmqpException {
And respective docs: https://docs.spring.io/spring-amqp/docs/current/reference/html/#exception-handling
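A minimal sketch of what that could look like; the header name "x-duplicate" is a hypothetical choice for illustration:

import org.springframework.amqp.AmqpRejectAndDontRequeueException;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessagePostProcessor;

public class DuplicateFilterPostProcessor implements MessagePostProcessor {

    @Override
    public Message postProcessMessage(Message message) {
        // "x-duplicate" is a hypothetical header used only for this example
        Object duplicate = message.getMessageProperties().getHeaders().get("x-duplicate");
        if (Boolean.TRUE.equals(duplicate)) {
            // causes basic.reject with requeue=false; the @RabbitListener method is skipped
            throw new AmqpRejectAndDontRequeueException("skipping duplicate message");
        }
        return message;
    }
}

You would then register it on the listener container (factory), e.g. with setAfterReceivePostProcessors(new DuplicateFilterPostProcessor()).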

How to disable container stop on Kafka error - spring-kafka

When there is an authentication error on my Kafka broker, the application stops the container.
Is there any way to disable this behavior?
I'm using KafkaListener
@KafkaListener(topics = "${spring.kafka.consumer.topic}", concurrency = "${listen.concurrency:3}")
public void topicListener(Product<String> message) {
    getLogger().info("message consumed");
}
Let me know if I need to post other settings.
See authorizationExceptionRetryInterval option on the ConsumerProperties:
/**
 * Set the interval between retries after {@code AuthorizationException} is thrown
 * by {@code KafkaConsumer}. By default the field is null and retries are disabled.
 * In such case the container will be stopped.
 *
 * The interval must be less than {@code max.poll.interval.ms} consumer property.
 *
 * @param authorizationExceptionRetryInterval the duration between retries
 * @since 2.3.5
 */
public void setAuthorizationExceptionRetryInterval(Duration authorizationExceptionRetryInterval) {
See docs for more info: https://docs.spring.io/spring-kafka/docs/current/reference/html/#kafka-container
By default, no interval is configured; authorization errors are considered fatal, which causes the container to stop.
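A minimal sketch of wiring this up on the container factory; the bean shape and the 10-second interval are assumptions:

@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // keep retrying instead of stopping the container on AuthorizationException;
    // must be shorter than max.poll.interval.ms
    factory.getContainerProperties().setAuthorizationExceptionRetryInterval(Duration.ofSeconds(10));
    return factory;
}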

Multiple Kafka producer instances for each HTTP request

I have a REST endpoint which can be invoked by multiple users at the same time. This endpoint invokes a transactional Kafka producer.
My understanding is that I can't use the same Kafka producer instance concurrently when transactions are in use.
How can I efficiently create a new Kafka producer instance for each HTTP request?
// Kafka transactions enabled
producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
producerProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "prod-1-");

@Service
public class ProducerService {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    public void postMessage(final MyUser message) {
        // wrapping the send method in a transaction
        this.kafkaTemplate.executeInTransaction(operations -> {
            operations.send("custom", null, message);
            return null;
        });
    }
}
See the javadocs for the DefaultKafkaProducerFactory. It maintains a cache of producers for producer-initiated transactions.
/**
 * The {@link ProducerFactory} implementation for a {@code singleton} shared {@link Producer} instance.
 * <p>
 * This implementation will return the same {@link Producer} instance (if transactions are
 * not enabled) for the provided {@link Map} {@code configs} and optional {@link Serializer}
 * implementations on each {@link #createProducer()} invocation.
 * ...
 * Setting {@link #setTransactionIdPrefix(String)} enables transactions; in which case, a
 * cache of producers is maintained; closing a producer returns it to the cache. The
 * producers are closed and the cache is cleared when the factory is destroyed, the
 * application context stopped, or the {@link #reset()} method is called.
 * ...
 */
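In other words, you don't need a producer per request. A sketch of enabling the cache; the bootstrap address and serializer choices are assumptions:

@Bean
public ProducerFactory<Object, Object> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    DefaultKafkaProducerFactory<Object, Object> factory = new DefaultKafkaProducerFactory<>(props);
    // enables transactions and the producer cache; each concurrent transaction
    // checks a producer out of the cache and returns it on close
    factory.setTransactionIdPrefix("prod-1-");
    return factory;
}

Concurrent HTTP requests can then safely share one KafkaTemplate built on this factory; each executeInTransaction call gets its own cached producer.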

How to implement Spring Retry for SocketTimeoutException from Rest Template

I want to use Spring Retry in case of a SocketTimeoutException from RestTemplate,
but RestTemplate throws the following:
org.springframework.web.client.ResourceAccessException: I/O error: Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out
I have added SocketTimeoutException to the RetryTemplate's retryable-exceptions map.
Does Spring Retry work if I only add SocketTimeoutException to the map, or do I need to add ResourceAccessException as well?
You need to use a custom SimpleRetryPolicy that has the traverseCauses option set. Then, instead of just looking at the top level exception, it will examine the cause hierarchy to look for a match.
/**
 * Create a {@link SimpleRetryPolicy} with the specified number of retry
 * attempts. If traverseCauses is true, the exception causes will be traversed until
 * a match is found.
 *
 * @param maxAttempts the maximum number of attempts
 * @param retryableExceptions the map of exceptions that are retryable based on the
 *        map value (true/false).
 * @param traverseCauses is this clause traversable
 */
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
        boolean traverseCauses) {
    this(maxAttempts, retryableExceptions, traverseCauses, false);
}
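A minimal sketch of building such a policy; the attempt count and URL are assumptions:

Map<Class<? extends Throwable>, Boolean> retryable = new HashMap<>();
retryable.put(SocketTimeoutException.class, true);

// traverseCauses = true: the policy walks the cause chain, so a
// SocketTimeoutException nested inside a ResourceAccessException matches
RetryTemplate retryTemplate = new RetryTemplate();
retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3, retryable, true));

String body = retryTemplate.execute(context ->
        restTemplate.getForObject("https://example.com/api", String.class));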

How to set Durable Subscriber in DefaultMessageListenerContainer in spring?

The producer is not sending the message as persistent, and when I consume the message through a MessageListener and a runtime exception occurs, the broker retries a specific number of times (the default is 6 on the ActiveMQ side) and the message gets lost.
The reason is that, since the producer is not setting the delivery mode to persistent, after the retry attempts are exhausted no DLQ is created and the message does not move to a DLQ. Because of this, I lose the message.
My code is like this:
@Configuration
@PropertySource("classpath:application.properties")
public class ActiveMqJmsConfig {

    @Autowired
    private AbcMessageListener abcMessageListener;

    @Bean
    public DefaultMessageListenerContainer purchaseMsgListenerforAMQ(
            @Qualifier("AMQConnectionFactory") ConnectionFactory amqConFactory) {
        LOG.info("Message listener for purchases from AMQ : Starting");
        DefaultMessageListenerContainer defaultMessageListenerContainer =
                new DefaultMessageListenerContainer();
        defaultMessageListenerContainer.setConnectionFactory(amqConFactory);
        defaultMessageListenerContainer.setMaxConcurrentConsumers(4);
        defaultMessageListenerContainer.setDestinationName(purchaseReceivingQueueName);
        defaultMessageListenerContainer.setMessageListener(abcMessageListener);
        defaultMessageListenerContainer.setSessionTransacted(true);
        return defaultMessageListenerContainer;
    }

    @Bean
    @Qualifier(value = "AMQConnectionFactory")
    public ConnectionFactory activeMQConnectionFactory() {
        ActiveMQConnectionFactory amqConnectionFactory = new ActiveMQConnectionFactory();
        // property keys are assumed; the quotes were missing in the original
        amqConnectionFactory.setBrokerURL(System.getProperty("broker.url", "tcp://localhost:61616"));
        amqConnectionFactory.setUserName(System.getProperty("broker.user", "admin"));
        amqConnectionFactory.setPassword(System.getProperty("broker.password", "admin"));
        return amqConnectionFactory;
    }
}

@Component
public class AbcMessageListener implements MessageListener {

    @Override
    public void onMessage(Message msg) {
        // CODE implementation
    }
}
Problem: by setting the client id at the connection level (Connection.setClientID(String)), we can subscribe as a durable subscriber even though the message is not persistent. By doing this, if the application throws a runtime exception, after a certain number of retry attempts a DLQ is created for the queue and the message moves to the DLQ.
But in DefaultMessageListenerContainer the connection is not exposed to the client; it is maintained by the class itself, as a pool I guess.
How can I achieve a durable subscription with DefaultMessageListenerContainer?
You can set the client id on the container instead:
/**
 * Specify the JMS client ID for a shared Connection created and used
 * by this container.
 * <p>Note that client IDs need to be unique among all active Connections
 * of the underlying JMS provider. Furthermore, a client ID can only be
 * assigned if the original ConnectionFactory hasn't already assigned one.
 * @see javax.jms.Connection#setClientID
 * @see #setConnectionFactory
 */
public void setClientId(@Nullable String clientId) {
    this.clientId = clientId;
}
and
/**
 * Set the name of a durable subscription to create. This method switches
 * to pub-sub domain mode and activates subscription durability as well.
 * <p>The durable subscription name needs to be unique within this client's
 * JMS client id. Default is the class name of the specified message listener.
 * <p>Note: Only 1 concurrent consumer (which is the default of this
 * message listener container) is allowed for each durable subscription,
 * except for a shared durable subscription (which requires JMS 2.0).
 * @see #setPubSubDomain
 * @see #setSubscriptionDurable
 * @see #setSubscriptionShared
 * @see #setClientId
 * @see #setMessageListener
 */
public void setDurableSubscriptionName(@Nullable String durableSubscriptionName) {
    this.subscriptionName = durableSubscriptionName;
    this.subscriptionDurable = (durableSubscriptionName != null);
}
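Applied to the container bean from the question, a minimal sketch; the client id, subscription name, and topic destination are assumptions (durable subscriptions apply to topics, not queues):

@Bean
public DefaultMessageListenerContainer purchaseMsgListenerforAMQ(
        @Qualifier("AMQConnectionFactory") ConnectionFactory amqConFactory) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(amqConFactory);
    container.setDestinationName("purchase.topic");             // assumed topic name
    container.setClientId("purchase-listener");                 // unique JMS client id
    container.setDurableSubscriptionName("purchaseDurableSub"); // also switches to pub-sub and durable mode
    container.setMessageListener(abcMessageListener);
    container.setSessionTransacted(true);
    return container;
}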
