How to disable container stop on Kafka error (Spring Kafka)

When there is an authentication error on my Kafka broker, the application stops the container.
Is there any way to disable this behavior?
I'm using @KafkaListener:
@KafkaListener(topics = "${spring.kafka.consumer.topic}", concurrency = "${listen.concurrency:3}")
public void topicListener(Product<String> message) {
    getLogger().info("message consumed");
}
Let me know if I need to post other settings.

See the authorizationExceptionRetryInterval option on ConsumerProperties:
/**
 * Set the interval between retries after {@code AuthorizationException} is thrown
 * by {@code KafkaConsumer}. By default the field is null and retries are disabled.
 * In such case the container will be stopped.
 *
 * The interval must be less than {@code max.poll.interval.ms} consumer property.
 *
 * @param authorizationExceptionRetryInterval the duration between retries
 * @since 2.3.5
 */
public void setAuthorizationExceptionRetryInterval(Duration authorizationExceptionRetryInterval) {
See docs for more info: https://docs.spring.io/spring-kafka/docs/current/reference/html/#kafka-container
By default, no interval is configured - authorization errors are considered fatal, which causes the container to stop.
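For example, here is a minimal sketch of enabling the retry interval on the listener container factory, assuming Spring Kafka 2.3.5+ with Spring Boot auto-configuring the ConsumerFactory; the bean name and the 30-second interval are illustrative choices, not taken from the question:
import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Retry every 30 seconds instead of stopping the container when the broker
    // throws AuthorizationException; the interval must stay below max.poll.interval.ms.
    factory.getContainerProperties().setAuthorizationExceptionRetryInterval(Duration.ofSeconds(30));
    return factory;
}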

Related

Kafka ConsumerInterceptor onCommit not being called when using transactions

I'm using Spring Kafka in a Spring Boot application. I'm attempting to use a Kafka ConsumerInterceptor to intercept when offsets are committed.
This works when producer transactions are not enabled, but once transactions are turned on, Interceptor::onCommit is no longer called.
In the following minimal example, everything works as expected:
@SpringBootApplication
@EnableKafka
class Application {
    @KafkaListener(topics = ["test"])
    fun onMessage(message: String) {
        log.warn("onMessage: $message")
    }
}
Interceptor:
class Interceptor : ConsumerInterceptor<String, String> {
    override fun onCommit(offsets: MutableMap<TopicPartition, OffsetAndMetadata>) {
        log.warn("onCommit: $offsets")
    }

    override fun onConsume(records: ConsumerRecords<String, String>): ConsumerRecords<String, String> {
        log.warn("onConsume: $records")
        return records
    }

    // Required by the ConsumerInterceptor/Configurable contract; no-ops here.
    override fun close() {}

    override fun configure(configs: Map<String, *>?) {}
}
Application config:
spring:
  kafka:
    consumer:
      enable-auto-commit: false
      auto-offset-reset: earliest
      properties:
        "interceptor.classes": com.example.Interceptor
      group-id: test-group
    listener:
      ack-mode: record
Inside a test using @EmbeddedKafka:
@Test
fun sendMessage() {
    kafkaTemplate.send("test", "id", "sent message").get() // block so we don't end before the consumer gets the message
}
This outputs what I would expect:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords@6a646f3c
onMessage: sent message
onCommit: {test-0=OffsetAndMetadata{offset=1, leaderEpoch=null, metadata=''}}
However, when I enable transactions by providing a transaction-id-prefix, the Interceptor's onCommit is no longer called.
My updated config only adds:
spring:
  kafka:
    producer:
      transaction-id-prefix: tx-id-
And the test is updated to wrap send in a transaction:
@Test
fun sendMessage() {
    kafkaTemplate.executeInTransaction {
        kafkaTemplate.send("test", "a", "sent message").get()
    }
}
With this change, my log output now only shows:
onConsume: org.apache.kafka.clients.consumer.ConsumerRecords@738b5968
onMessage: sent message
The Interceptor's onConsume method is called and the @KafkaListener receives the message, but onCommit is never called.
Does anyone happen to know what's happening here? Are my expectations about what I should see incorrect?
Offsets are not committed via the consumer when using transactions (exactly-once semantics). Instead, the offsets are committed via the producer, so the consumer-side interceptor's onCommit is never invoked.
From the KafkaProducer javadoc for sendOffsetsToTransaction:
/**
 * Sends a list of specified offsets to the consumer group coordinator, and also marks
 * those offsets as part of the current transaction. These offsets will be considered
 * committed only if the transaction is committed successfully. The committed offset should
 * be the next message your application will consume, i.e. lastProcessedMessageOffset + 1.
 * <p>
 * This method should be used when you need to batch consumed and produced messages
 * together, typically in a consume-transform-produce pattern. Thus, the specified
 * {@code groupMetadata} should be extracted from the used {@link KafkaConsumer consumer} via
 * {@link KafkaConsumer#groupMetadata()} to leverage consumer group metadata. This will provide
 * stronger fencing than just supplying the {@code consumerGroupId} and passing in {@code new ConsumerGroupMetadata(consumerGroupId)},
 * however note that the full set of consumer group metadata returned by {@link KafkaConsumer#groupMetadata()}
 * requires the brokers to be on version 2.5 or newer to understand.
 *
 * <p>
 * Note, that the consumer should have {@code enable.auto.commit=false} and should
 * also not commit offsets manually (via {@link KafkaConsumer#commitSync(Map) sync} or
 * {@link KafkaConsumer#commitAsync(Map, OffsetCommitCallback) async} commits).
 * This method will raise {@link TimeoutException} if the producer cannot send offsets before expiration of {@code max.block.ms}.
 * Additionally, it will raise {@link InterruptException} if interrupted.
 *
 * @throws IllegalStateException if no transactional.id has been configured or no transaction has been started.
 * @throws ProducerFencedException fatal error indicating another producer with the same transactional.id is active
 * @throws org.apache.kafka.common.errors.UnsupportedVersionException fatal error indicating the broker
 *         does not support transactions (i.e. if its version is lower than 0.11.0.0) or
 *         the broker doesn't support latest version of transactional API with all consumer group metadata
 *         (i.e. if its version is lower than 2.5.0).
 * @throws org.apache.kafka.common.errors.UnsupportedForMessageFormatException fatal error indicating the message
 *         format used for the offsets topic on the broker does not support transactions
 * @throws org.apache.kafka.common.errors.AuthorizationException fatal error indicating that the configured
 *         transactional.id is not authorized, or the consumer group id is not authorized.
 * @throws org.apache.kafka.clients.consumer.CommitFailedException if the commit failed and cannot be retried
 *         (e.g. if the consumer has been kicked out of the group). Users should handle this by aborting the transaction.
 * @throws org.apache.kafka.common.errors.FencedInstanceIdException if this producer instance gets fenced by broker due to a
 *         mis-configured consumer instance id within group metadata.
 * @throws org.apache.kafka.common.errors.InvalidProducerEpochException if the producer has attempted to produce with an old epoch
 *         to the partition leader. See the exception for more details
 * @throws KafkaException if the producer has encountered a previous fatal or abortable error, or for any
 *         other unexpected error
 * @throws TimeoutException if the time taken for sending offsets has surpassed max.block.ms.
 * @throws InterruptException if the thread is interrupted while blocked
 */
public void sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata> offsets,
        ConsumerGroupMetadata groupMetadata) throws ProducerFencedException {
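To make the mechanics concrete, here is a rough sketch of the consume-transform-produce pattern that javadoc describes, using the plain Kafka client API rather than Spring's listener container; the topic name, poll timeout, and error handling are illustrative only. Because the offsets ride along with the producer's transaction, the consumer never issues a commit of its own, so a ConsumerInterceptor's onCommit has nothing to observe:
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

void pollTransformProduce(KafkaConsumer<String, String> consumer,
                          KafkaProducer<String, String> producer) {
    // Assumes producer.initTransactions() was called once at startup.
    producer.beginTransaction();
    try {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> record : records) {
            producer.send(new ProducerRecord<>("out-topic", record.key(), record.value()));
            // The offset to commit is the next record to consume, i.e. current offset + 1.
            offsets.put(new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset() + 1));
        }
        // Offsets are committed through the producer as part of the transaction,
        // never through consumer.commitSync()/commitAsync().
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
        producer.commitTransaction();
    }
    catch (KafkaException ex) {
        // Simplified error handling for the sketch; fatal exceptions such as
        // ProducerFencedException require closing the producer instead.
        producer.abortTransaction();
    }
}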

How to periodically publish a message to ActiveMQ with Spring Integration DSL

I installed ActiveMQ and want to send a message to "my.queue" periodically, say every 10 seconds.
I'm struggling to comprehend the Spring Integration DSL.
I need something like:
IntegrationFlows.from(every 5 seconds)
.send(message to "my.queue")
Yes, you can do that with the Spring Integration Java DSL and its IntegrationFlow abstraction. To create a periodic task, you need to start the flow from this IntegrationFlows factory method:
/**
 * Provides {@link Supplier} as source of messages to the integration flow,
 * which will be triggered by a <b>provided</b>
 * {@link org.springframework.integration.endpoint.SourcePollingChannelAdapter}.
 * @param messageSource the {@link Supplier} to populate.
 * @param endpointConfigurer the {@link Consumer} to provide more options for the
 * {@link org.springframework.integration.config.SourcePollingChannelAdapterFactoryBean}.
 * @param <T> the supplier type.
 * @return new {@link IntegrationFlowBuilder}.
 * @see Supplier
 */
public static <T> IntegrationFlowBuilder fromSupplier(Supplier<T> messageSource,
        Consumer<SourcePollingChannelAdapterSpec> endpointConfigurer) {
The Supplier may return an object you'd like to send as a payload downstream. The second, Consumer argument can be configured with:
.poller(p -> p.fixedDelay(1000))
This way, every second a message is created from the supplied payload and sent downstream.
To send a message to ActiveMQ, you need to use the org.springframework.integration.jms.dsl.Jms factory and its method for the respective channel adapter:
/**
 * The factory to produce a {@link JmsOutboundChannelAdapterSpec}.
 * @param connectionFactory the JMS ConnectionFactory to build on
 * @return the {@link JmsOutboundChannelAdapterSpec} instance
 */
public static JmsOutboundChannelAdapterSpec.JmsOutboundChannelSpecTemplateAware outboundAdapter(
        ConnectionFactory connectionFactory) {
The result of this factory has to be used in the DSL callback like:
/**
 * Populate a {@link ServiceActivatingHandler} for the provided
 * {@link MessageHandler} implementation.
 * Can be used as Java 8 Lambda expression:
 * <pre class="code">
 * {@code
 *  .handle(m -> logger.info(m.getPayload())
 * }
 * </pre>
 * @param messageHandler the {@link MessageHandler} to use.
 * @return the current {@link BaseIntegrationFlowDefinition}.
 */
public B handle(MessageHandler messageHandler) {
All the info is present in the docs: https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl
Something like this:
@Bean
public IntegrationFlow jmsPeriodicFlow() {
    return IntegrationFlows.fromSupplier(() -> "hello",
                    e -> e.poller(p -> p.fixedDelay(5000)))
            .handle(Jms.outboundAdapter(jmsConnectionFactory())
                    .destination("my.queue"))
            .get();
}

How to disable logging all messages in a Kafka batch in case of an exception?

When using @KafkaListener with batches, the error handler logs the content of the full batch (all messages) in case of an exception.
How can I make this less verbose? I'd like to avoid spamming the log files with all the messages and only see the actual exception.
Here is a minimal example of what my consumer currently looks like:
@Component
class TestConsumer {

    @Bean
    fun kafkaBatchListenerContainerFactory(kafkaProperties: KafkaProperties): ConcurrentKafkaListenerContainerFactory<String, String> {
        val configs = kafkaProperties.buildConsumerProperties()
        configs[ConsumerConfig.MAX_POLL_RECORDS_CONFIG] = 10000
        val factory = ConcurrentKafkaListenerContainerFactory<String, String>()
        factory.consumerFactory = DefaultKafkaConsumerFactory(configs)
        factory.isBatchListener = true
        return factory
    }

    @KafkaListener(
        topics = ["myTopic"],
        containerFactory = "kafkaBatchListenerContainerFactory"
    )
    fun batchListen(values: List<ConsumerRecord<String, String>>) {
        // Something that might throw an exception in rare cases.
    }
}
What version are you using?
This container property was added in 2.2.14.
/**
 * Set to false to log {@code record.toString()} in log messages instead
 * of {@code topic-partition@offset}.
 * @param onlyLogRecordMetadata false to log the entire record.
 * @since 2.2.14
 */
public void setOnlyLogRecordMetadata(boolean onlyLogRecordMetadata) {
    this.onlyLogRecordMetadata = onlyLogRecordMetadata;
}
It has been true by default since version 2.7 (which is why the javadocs now read that way).
This was the previous javadoc:
/**
 * Set to true to only log {@code topic-partition@offset} in log messages instead
 * of {@code record.toString()}.
 * @param onlyLogRecordMetadata true to only log the topic/partition/offset.
 * @since 2.2.14
 */
Also, starting with version 2.5, you can set the log level on the error handler:
/**
 * Set the level at which the exception thrown by this handler is logged.
 * @param logLevel the level (default ERROR).
 */
public void setLogLevel(KafkaException.Level logLevel) {
    Assert.notNull(logLevel, "'logLevel' cannot be null");
    this.logLevel = logLevel;
}
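Putting both together, here is a minimal Java sketch of the factory from the question, assuming a Spring Kafka version roughly in the 2.5 to 2.7 range (in 2.8+ the error handler API changed to DefaultErrorHandler/setCommonErrorHandler). RecoveringBatchErrorHandler is just one batch-capable error handler exposing setLogLevel and stands in for whichever handler you actually use:
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.KafkaException;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.RecoveringBatchErrorHandler;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaBatchListenerContainerFactory(
        KafkaProperties kafkaProperties) {
    Map<String, Object> configs = kafkaProperties.buildConsumerProperties();
    configs.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10000);
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(configs));
    factory.setBatchListener(true);
    // Log only topic-partition@offset instead of each record's toString().
    factory.getContainerProperties().setOnlyLogRecordMetadata(true);
    // Lower the level at which the error handler logs the failed batch.
    RecoveringBatchErrorHandler errorHandler = new RecoveringBatchErrorHandler();
    errorHandler.setLogLevel(KafkaException.Level.WARN);
    factory.setBatchErrorHandler(errorHandler);
    return factory;
}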

Multiple Kafka Producer Instance for each Http Request

I have a REST endpoint that can be invoked by multiple users at the same time. This endpoint invokes a transactional Kafka producer.
What I understand is that I can't use the same Kafka producer instance concurrently when transactions are used.
How can I efficiently create a new Kafka producer instance for each HTTP request?
// Kafka transactions enabled
producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
producerProps.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "prod-1-");

@Service
public class ProducerService {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    public void postMessage(final MyUser message) {
        // wrapping the send method in a transaction
        this.kafkaTemplate.executeInTransaction(operations ->
                operations.send("custom", null, message));
    }
}
See the javadocs for the DefaultKafkaProducerFactory. It maintains a cache of producers for producer-initiated transactions.
/**
 * The {@link ProducerFactory} implementation for a {@code singleton} shared {@link Producer} instance.
 * <p>
 * This implementation will return the same {@link Producer} instance (if transactions are
 * not enabled) for the provided {@link Map} {@code configs} and optional {@link Serializer}
 * implementations on each {@link #createProducer()} invocation.
 * ...
 * Setting {@link #setTransactionIdPrefix(String)} enables transactions; in which case, a
 * cache of producers is maintained; closing a producer returns it to the cache. The
 * producers are closed and the cache is cleared when the factory is destroyed, the
 * application context stopped, or the {@link #reset()} method is called.
 * ...
 */
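In other words, you don't need a producer per request: with a transactionIdPrefix set, the factory pools transactional producers, so the single injected KafkaTemplate can be shared safely across concurrent requests. A minimal sketch, where the bootstrap server address, serializers, and prefix are placeholders for your own settings:
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

@Bean
public ProducerFactory<Object, Object> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    DefaultKafkaProducerFactory<Object, Object> factory = new DefaultKafkaProducerFactory<>(props);
    // Enables transactions; the factory then keeps a cache of producers, handing one
    // out for each executeInTransaction call and returning it to the cache afterwards.
    factory.setTransactionIdPrefix("prod-1-");
    return factory;
}

@Bean
public KafkaTemplate<Object, Object> kafkaTemplate(ProducerFactory<Object, Object> producerFactory) {
    return new KafkaTemplate<>(producerFactory);
}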

How to implement Spring Retry for SocketTimeoutException from Rest Template

I want to use Spring Retry in case of a SocketTimeoutException from RestTemplate,
but RestTemplate throws the following:
org.springframework.web.client.ResourceAccessException: I/O error: Read timed out; nested exception is java.net.SocketTimeoutException: Read timed out
I have added SocketTimeoutException to the RetryTemplate's exception map.
Does Spring Retry work if I only add SocketTimeoutException to the map, or do I need to add ResourceAccessException as well?
You need to use a custom SimpleRetryPolicy that has the traverseCauses option set. Then, instead of just looking at the top level exception, it will examine the cause hierarchy to look for a match.
/**
 * Create a {@link SimpleRetryPolicy} with the specified number of retry
 * attempts. If traverseCauses is true, the exception causes will be traversed until
 * a match is found.
 *
 * @param maxAttempts the maximum number of attempts
 * @param retryableExceptions the map of exceptions that are retryable based on the
 * map value (true/false).
 * @param traverseCauses is this clause traversable
 */
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
        boolean traverseCauses) {
    this(maxAttempts, retryableExceptions, traverseCauses, false);
}
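For example, a minimal sketch of a RetryTemplate configured this way; the three attempts and the URL are placeholders. With traverseCauses set to true, the ResourceAccessException thrown by RestTemplate still matches, because its cause chain contains a SocketTimeoutException:
import java.net.SocketTimeoutException;
import java.util.HashMap;
import java.util.Map;

import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.web.client.RestTemplate;

public String fetchWithRetry(RestTemplate restTemplate) {
    Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
    retryableExceptions.put(SocketTimeoutException.class, true);

    RetryTemplate retryTemplate = new RetryTemplate();
    // 3 attempts; traverseCauses = true so nested causes are checked for a match.
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3, retryableExceptions, true));

    return retryTemplate.execute(context ->
            restTemplate.getForObject("https://example.com/api", String.class));
}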
