Spring Kafka consumer with circuit breaker functionality using the Resilience4j library - spring-boot

I am trying to implement a Spring Kafka consumer that needs to be paused after a certain exception occurs while processing an event (for example, the database is down while storing the event info).
How do we handle this scenario using the Resilience4j circuit breaker approach with Spring Boot 2.3.8 (Spring Kafka)?
I am also looking for some examples of pausing and resuming the consumer.
@Component
public class CircuitBreakerManager {

    private CircuitBreaker circuitBreaker;

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    public CircuitBreakerManager() {
        CircuitBreakerConfig circuitBreakerConfig = CircuitBreakerConfig.custom()
                .slidingWindowType(CircuitBreakerConfig.SlidingWindowType.COUNT_BASED)
                .enableAutomaticTransitionFromOpenToHalfOpen()
                .minimumNumberOfCalls(5)
                .permittedNumberOfCallsInHalfOpenState(3)
                .slidingWindowSize(10)
                .failureRateThreshold(50)
                .slowCallRateThreshold(60.0f)
                .slowCallDurationThreshold(Duration.ofSeconds(3))
                .build();
        CircuitBreakerRegistry registry = CircuitBreakerRegistry.of(circuitBreakerConfig);
        this.circuitBreaker = registry.circuitBreaker("serialization_exception");
        this.circuitBreaker.getEventPublisher().onStateTransition(this::onStateChange);
    }

    private void onStateChange(CircuitBreakerOnStateTransitionEvent circuitBreakerEvent) {
        CircuitBreaker.State toState = circuitBreakerEvent.getStateTransition().getToState();
        System.out.println("Change in Circuit Breaker state " + toState);
        switch (toState) {
            case OPEN:
                kafkaListenerEndpointRegistry.getListenerContainer("my_listener_id").stop();
                break;
            case CLOSED:
                break;
            case HALF_OPEN:
                kafkaListenerEndpointRegistry.getListenerContainer("my_listener_id").start();
                break;
        }
    }
}
At the Kafka listener I just want to catch the parse error. If we get more than 5 parsing errors, the listener needs to be stopped, but I am not sure how the circuit breaker will get triggered.
@CircuitBreaker(name = RESILIENCE4J_INSTANCE_NAME)
private Event getParsedEvent(ConsumerRecord consumerRecord) {
    Event event = getEvent(consumerRecord);
    if (StringUtils.isEmpty(event)) {
        throw new RuntimeException("Serialization Exception occurred");
    }
    return event;
}

See Pausing and Resuming Listener Containers.
Note that pause won't take effect until all the records returned from the current poll have been processed (or an exception is thrown by the listener, as long as the default error handler is in place).
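For illustration, a minimal sketch (reusing the my_listener_id from your code) of pausing and resuming through the KafkaListenerEndpointRegistry could look like the following; unlike stop()/start(), pause()/resume() keep the consumer assigned to its partitions:

@Autowired
private KafkaListenerEndpointRegistry registry;

public void pauseListener() {
    // Takes effect after the records from the current poll have been processed.
    registry.getListenerContainer("my_listener_id").pause();
}

public void resumeListener() {
    registry.getListenerContainer("my_listener_id").resume();
}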

Related

Transactional kafka listener retry

I'm trying to create a Spring Kafka @KafkaListener which is both transactional (Kafka and database) and uses retry. I am using Spring Boot. The documentation for error handlers says that
When transactions are being used, no error handlers are configured, by default, so that the exception will roll back the transaction. Error handling for transactional containers are handled by the AfterRollbackProcessor. If you provide a custom error handler when using transactions, it must throw an exception if you want the transaction rolled back (source).
However, when I configure my listener with a @Transactional("kafkaTransactionManager") annotation, even though I can clearly see that the template rolls back produced messages when an exception is raised, the container actually uses a non-null commonErrorHandler rather than an AfterRollbackProcessor. This is the case even when I explicitly configure the commonErrorHandler to null in the container factory. I do not see any evidence that my configured AfterRollbackProcessor is ever invoked, even after the commonErrorHandler exhausts its retry policy.
I'm uncertain how Spring Kafka's error handling works in general at this point, and am looking for clarification. The questions I want to answer are:
What is the recommended way to configure transactional kafka listeners with Spring-Kafka 2.8.0? Have I done it correctly?
Should the common error handler indeed be used rather than the after rollback processor? Does it rollback the current transaction before trying to process the message again according to the retry policy?
In general, when I have a transactional kafka listener, is there ever more than one layer of error handling I should be aware of? E.g. if my common error handler re-throws exceptions of kind T, will another handler catch that and potentially start retry of its own?
Thanks!
My code:
@Configuration
@EnableScheduling
@EnableKafka
public class KafkaConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
        factory.setConsumerFactory(consumerFactory);

        var afterRollbackProcessor =
                new DefaultAfterRollbackProcessor<Object, Object>(
                        (record, e) -> LOGGER.info("After rollback processor triggered! {}", e.getMessage()),
                        new FixedBackOff(1_000, 1));

        // Configures different error handling for different listeners.
        factory.setContainerCustomizer(
                container -> {
                    var groupId = container.getContainerProperties().getGroupId();
                    if (groupId.equals("InputProcessorHigh") || groupId.equals("InputProcessorLow")) {
                        container.setAfterRollbackProcessor(afterRollbackProcessor);
                        // If I set commonErrorHandler to null, it is defaulted instead.
                    }
                });
        return factory;
    }
}
@Component
public class InputProcessor {

    private static final Logger LOGGER = LoggerFactory.getLogger(InputProcessor.class);

    private final KafkaTemplate<Integer, Object> template;
    private final AuditLogRepository repository;

    @Autowired
    public InputProcessor(KafkaTemplate<Integer, Object> template, AuditLogRepository repository) {
        this.template = template;
        this.repository = repository;
    }

    @KafkaListener(id = "InputProcessorHigh", topics = "input-high", concurrency = "3")
    @Transactional("kafkaTransactionManager")
    public void inputHighProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    @KafkaListener(id = "InputProcessorLow", topics = "input-low", concurrency = "1")
    @Transactional("kafkaTransactionManager")
    public void inputLowProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    public void processInputs(ConsumerRecord<Integer, Input> input) {
        var key = input.key();
        var message = input.value().getMessage();
        var output = new Output().setMessage(message);

        LOGGER.info("Processing {}", message);

        template.send("output-left", key, output);
        repository.createIfNotExists(message); // idempotent insert
        template.send("output-right", key, output);

        if (message.contains("ERROR")) {
            throw new RuntimeException("Simulated processing error!");
        }
    }
}
My application.yaml (minus my bootstrap-servers and security config):
spring:
  kafka:
    consumer:
      auto-offset-reset: 'earliest'
      key-deserializer: 'org.apache.kafka.common.serialization.IntegerDeserializer'
      value-deserializer: 'org.springframework.kafka.support.serializer.JsonDeserializer'
      isolation-level: 'read_committed'
      properties:
        spring.json.trusted.packages: 'java.util,java.lang,com.github.tomboyo.silverbroccoli.*'
    producer:
      transaction-id-prefix: 'tx-'
      key-serializer: 'org.apache.kafka.common.serialization.IntegerSerializer'
      value-serializer: 'org.springframework.kafka.support.serializer.JsonSerializer'
[EDIT] (solution code)
I was able to figure it out with Gary's help. As they say, we need to set the kafka transaction manager on the container so that the container can start transactions. The transactions documentation doesn't cover how to do this, and there are a few ways. First, we can get the mutable container properties object from the factory and set the transaction manager on that:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(...) {
    var factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setTransactionManager(...);
    return factory;
}
If we are in Spring Boot, we can re-use some of the auto configuration to set sensible defaults on our factory before we customize it. We can see that the KafkaAutoConfiguration module imports KafkaAnnotationDrivenConfiguration, which produces a ConcurrentKafkaListenerContainerFactoryConfigurer bean. This appears to be responsible for all the default configuration in a Spring-Boot application. So, we can inject that bean and use it to initialize our factory before adding customizations:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer bootConfigurer,
        ConsumerFactory<Object, Object> consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();

    // Apply default spring-boot configuration.
    bootConfigurer.configure(factory, consumerFactory);

    factory.setContainerCustomizer(
            container -> {
                ... // do whatever
            });
    return factory;
}
Once that's done, the container uses the AfterRollbackProcessor for error handling, as expected. As long as I don't explicitly configure a common error handler, this appears to be the only layer of exception handling.
The AfterRollbackProcessor is only used when the container knows about the transaction; you must provide a KafkaTransactionManager to the container so that the Kafka transaction is started by the container, and the offsets sent to the transaction. Using @Transactional is not the correct way to start a Kafka transaction.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions
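For reference, a minimal sketch of wiring the transaction manager into the container (the bean names and generic types here are assumptions, not taken from the question):

@Bean
public KafkaTransactionManager<Object, Object> kafkaTransactionManager(
        ProducerFactory<Object, Object> producerFactory) {
    // Requires a transactional producer factory, e.g. via
    // spring.kafka.producer.transaction-id-prefix in Spring Boot.
    return new KafkaTransactionManager<>(producerFactory);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        KafkaTransactionManager<Object, Object> kafkaTransactionManager) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    factory.setConsumerFactory(consumerFactory);
    // With the transaction manager on the container, the container starts the Kafka
    // transaction and sends the offsets to it; the AfterRollbackProcessor then applies.
    factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
    return factory;
}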

Spring cloud function Function interface return success/failure handling

I currently have a spring cloud stream application that has a listener function that mainly listens to a certain topic and executes the following in sequence:
1. Consume messages from a topic
2. Store the consumed message in the DB
3. Call an external service for some information
4. Process the data
5. Record the results in the DB
6. Send the message to another topic
7. Acknowledge the message (I have the acknowledge mode set to manual)
We have decided to move to Spring Cloud Function, and I have already been able to do almost all the steps above using the Function interface, with the source topic as input and the sink topic as output.
@Bean
public Function<Message<NotificationMessage>, Message<ValidatedEvent>> validatedProducts() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        return MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
    };
}
My problem is with the exception handling in step 7 (acknowledging the message). We only acknowledge the message if we are sure it was sent successfully to the sink topic; otherwise we do not acknowledge it.
My question is: how can this be implemented within Spring Cloud Function, especially since the send is fully handled by the framework (as the result of evaluating the Function interface implementation)?
Earlier, we could do this through try/catch:
@StreamListener(value = NotificationMessage.INPUT)
public void onMessage(Message<NotificationMessage> message) {
    try {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        Message<ValidatedEvent> outboundMessage = MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
        kafkaTemplate.send(outboundMessage);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
    } catch (Exception exception) {
        notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
    }
}
Is there a listener that triggers after the Function interface has returned successfully, something like KafkaSendCallback but without specifying a template?
Building upon what Oleg mentioned above, if you want to strictly restore the behavior in your StreamListener code, here is something you can try. Instead of using a function, you can switch to a consumer and then use KafkaTemplate to send on the outbound as you had previously.
@Bean
public Consumer<Message<NotificationMessage>> validatedProducts() {
    return message -> {
        try {
            Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
            String status = restEndpoint.getStatusFor(message.getPayload());
            ValidatedEvent event = getProcessingResult(message.getPayload(), status);
            Message<ValidatedEvent> outboundMessage = MessageBuilder
                    .withPayload(event)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                    .build();
            kafkaTemplate.send(outboundMessage); // here, make sure the data was sent successfully by using some callback
            // only ack if the data was sent successfully
            Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        } catch (Exception exception) {
            notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
        }
    };
}
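To make the "only ack if the send succeeded" part concrete, one possible sketch (assuming blocking briefly on the send is acceptable here) is to wait on the future returned by KafkaTemplate.send before acknowledging:

// Block until the broker confirms the send (or the timeout elapses);
// an exception here falls into the catch block, so no ack happens.
kafkaTemplate.send(outboundMessage).get(10, TimeUnit.SECONDS);
Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);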
Another thing that is worth looking into is using Kafka transactions, in which case if it doesn't work end-to-end, no acknowledgment will happen. Spring Cloud Stream binder has support for this based on the foundations in Spring for Apache Kafka. More details here. Here is the Spring Cloud Stream doc on this.
Spring Cloud Stream has no knowledge of the function. It is just the same message handler as it was before, so the same approach with a callback that you used before would work with functions. Perhaps you can share some code that clarifies what you mean? I also don't understand what you mean by "..send method is fully dependant on the Spring Framework..".
Alright, so what I opted for was actually not to use KafkaTemplate (or StreamBridge, for that matter). While that is a feasible solution, it would mean splitting my Function into a Consumer and some sort of improvised Supplier (the KafkaTemplate in this case).
As I wanted to adhere to the design goals of the functional interface, I isolated the database-update behaviour in a ProducerListener implementation:
@Configuration
public class ProducerListenerConfiguration {

    private final MongoTemplate mongoTemplate;

    public ProducerListenerConfiguration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public ProducerListener myProducerListener() {
        return new ProducerListener() {

            @SneakyThrows
            @Override
            public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
                final ValidatedEvent event = new ObjectMapper().readerFor(ValidatedEvent.class).readValue((byte[]) producerRecord.value());
                final var updateResult = updateDocumentProcessedState(event.getKey(), event.getPayload().getVersion(), true);
            }

            @SneakyThrows
            @Override
            public void onError(ProducerRecord producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
                ProducerListener.super.onError(producerRecord, recordMetadata, exception);
            }
        };
    }

    public UpdateResult updateDocumentProcessedState(String id, long version, boolean isProcessed) {
        Query query = new Query();
        query.addCriteria(Criteria.where("_id").is(id));

        Update update = new Update();
        update.set("processed", isProcessed);
        update.set("version", version);

        return mongoTemplate.updateFirst(query, update, ProductChangedEntity.class);
    }
}
Then with each successful attempt, the DB is updated with the processing result and the updated version number.

Kafka Consumer with Circuit Breaker, Retry Patterns using Resilience4j

I need some help in understanding how I can come up with a solution using Spring Boot, Kafka, and Resilience4j to achieve a microservice call from my Kafka consumer. Let's say the microservice is down; then I need to notify my Kafka consumer, using a circuit breaker pattern, to stop fetching the messages/events until the microservice is up and running again.
With Spring Kafka, you could use the pause and resume methods depending on the CircuitBreaker state transitions. The best way I found for this is to define it as a "supervisor" with a @Configuration annotation. Resilience4j is also used.
@Configuration
public class CircuitBreakerConsumerConfiguration {

    public CircuitBreakerConsumerConfiguration(CircuitBreakerRegistry circuitBreakerRegistry, KafkaManager kafkaManager) {
        circuitBreakerRegistry.circuitBreaker("yourCBName").getEventPublisher().onStateTransition(event -> {
            switch (event.getStateTransition()) {
                case CLOSED_TO_OPEN:
                case CLOSED_TO_FORCED_OPEN:
                case HALF_OPEN_TO_OPEN:
                    kafkaManager.pause();
                    break;
                case OPEN_TO_HALF_OPEN:
                case HALF_OPEN_TO_CLOSED:
                case FORCED_OPEN_TO_CLOSED:
                case FORCED_OPEN_TO_HALF_OPEN:
                    kafkaManager.resume();
                    break;
                default:
                    throw new IllegalStateException("Unknown transition state: " + event.getStateTransition());
            }
        });
    }
}
This is what I used in combination with a KafkaManager annotated with @Component.
@Component
public class KafkaManager {

    private final KafkaListenerEndpointRegistry registry;

    public KafkaManager(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void pause() {
        registry.getListenerContainers().forEach(MessageListenerContainer::pause);
    }

    public void resume() {
        registry.getListenerContainers().forEach(MessageListenerContainer::resume);
    }
}
In addition my consumer service looks like this:
@KafkaListener(topics = "#{'${topic.name}'}", concurrency = "1", id = "CBListener")
public void receive(final ConsumerRecord<String, ReplayData> replayData,
                    Acknowledgment acknowledgment) throws Exception {
    try {
        httpClientServiceCB.receiveHandleCircuitBreaker(replayData);
        acknowledgment.acknowledge();
    } catch (Exception e) {
        acknowledgment.nack(1000);
    }
}
And the @CircuitBreaker annotation:
@CircuitBreaker(name = "yourCBName")
public void receiveHandleCircuitBreaker(ConsumerRecord<String, ReplayData> replayData) throws Exception {
    try {
        String response = restTemplate.getForObject("http://localhost:8081/item", String.class);
    } catch (Exception e) {
        // throwing the exception is needed to trigger the Circuit Breaker state change
        throw new Exception();
    }
}
And this is additionally supplemented by the following application.properties
resilience4j.circuitbreaker.instances.yourCBName.failure-rate-threshold=80
resilience4j.circuitbreaker.instances.yourCBName.sliding-window-type=COUNT_BASED
resilience4j.circuitbreaker.instances.yourCBName.sliding-window-size=5
resilience4j.circuitbreaker.instances.yourCBName.wait-duration-in-open-state=10000
resilience4j.circuitbreaker.instances.yourCBName.automatic-transition-from-open-to-half-open-enabled=true
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.listener.ack-mode=MANUAL_IMMEDIATE
Also have a look at https://resilience4j.readme.io/docs/circuitbreaker
If you are using Spring Kafka, you could maybe use the pause and resume methods of the ConcurrentMessageListenerContainer class.
You can attach an EventListener to the CircuitBreaker which listens on state transitions and pauses or resumes processing of events. Inject the CircuitBreakerRegistry into your bean:
circuitBreakerRegistry.circuitBreaker("yourCBName").getEventPublisher().onStateTransition(
    event -> {
        // container is the MessageListenerContainer of the listener to control
        switch (event.getStateTransition()) {
            case CLOSED_TO_OPEN:
                container.pause();
                break;
            case OPEN_TO_HALF_OPEN:
                container.resume();
                break;
            case HALF_OPEN_TO_CLOSED:
                container.resume();
                break;
            case HALF_OPEN_TO_OPEN:
                container.pause();
                break;
            case CLOSED_TO_FORCED_OPEN:
                container.pause();
                break;
            case FORCED_OPEN_TO_CLOSED:
                container.resume();
                break;
            case FORCED_OPEN_TO_HALF_OPEN:
                container.resume();
                break;
            default:
                break;
        }
    }
);

Spring cloud stream - notification when Kafka binder is initialized

I have a simple Kafka producer in my Spring Cloud Stream application. As my Spring application starts, I have a @PostConstruct method which performs some reconciliation and tries sending events to the Kafka producer.
The issue is that my Kafka producer is not yet ready when the reconciliation starts sending the events into it, leading to the below:
org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'orderbook-service-1.orderbook'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage ..
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:445)
Is there a way to get a notification during my application's startup that the Kafka channel is initialized, so that I only kick off the rec job after that?
Here is my code snippets:
public interface OrderEventChannel {

    String TOPIC_BINDING = "orderbook";

    @Output(TOPIC_BINDING)
    SubscribableChannel outboundEvent();
}

@Configuration
@EnableBinding({OrderEventChannel.class})
@ConditionalOnExpression("${aix.core.stream.outgoing.kafka.enabled:false}")
public class OutgoingKafkaConfiguration {
}

@Service
public class OutgoingOrderKafkaProducer {

    @Autowired
    private OrderEventChannel orderEventChannel;

    public void onOrderEvent(ClientEvent clientEvent) {
        try {
            Message<KafkaEvent> kafkaMsg = mapToKafkaMessage(clientEvent);
            SubscribableChannel subscribableChannel = orderEventChannel.outboundEvent();
            subscribableChannel.send(kafkaMsg);
        } catch (RuntimeException rte) {
            log.error("Error while publishing Kafka event [{}]", clientEvent, rte);
        }
    }
    ..
    ..
}
@PostConstruct is MUCH too early in the context lifecycle to start using beans; they are still being created, configured and wired together.
You can use an ApplicationListener (or @EventListener) to listen for an ApplicationReadyEvent (be sure to compare the event's applicationContext to the main application context because you may get other events).
You can also implement SmartLifecycle and put your code in start(); put your bean in a late Phase so it is started after everything is wired up.
Output bindings are started in phase Integer.MIN_VALUE + 1000, input bindings are started in phase Integer.MAX_VALUE - 1000.
So if you want to do something before messages start flowing, use a phase in-between these (e.g. 0, which is the default).
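A minimal sketch of the SmartLifecycle approach (the class name and the runReconciliation() call are placeholders for your rec job; phase 0 follows the guidance above):

@Component
public class ReconciliationStarter implements SmartLifecycle {

    private volatile boolean running;

    @Override
    public void start() {
        // Output bindings (phase Integer.MIN_VALUE + 1000) are already started here;
        // input bindings (phase Integer.MAX_VALUE - 1000) have not started consuming yet.
        runReconciliation(); // placeholder: publish the reconciliation events
        running = true;
    }

    @Override
    public void stop() {
        running = false;
    }

    @Override
    public boolean isRunning() {
        return running;
    }

    @Override
    public int getPhase() {
        return 0; // between the output and input binding phases
    }

    private void runReconciliation() {
        // ... send events through the output channel
    }
}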

How to stop and restart consuming message from the RabbitMQ with #RabbitListener

I am able to stop and restart consuming, but the problem is that after restarting, I can process the already published messages, while newly published messages are not processed.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Consumer;

@Component
public class RabbitMqueue implements Consumer {

    int count = 0;

    @RabbitListener(queues = "dataQueue")
    public void receivedData(@Payload Event msg, Channel channel,
            @Header(AmqpHeaders.CONSUMER_TAG) String tag) throws IOException, InterruptedException {
        count++;
        System.out.println("\n Message received from the dataQueue is " + msg);

        // Cancelling consuming works fine.
        if (count == 1) {
            channel.basicCancel(tag);
            System.out.println("Consumer is cancelled");
        }

        count++;
        System.out.println("\n count is " + count + "\n");
        Thread.sleep(5000);

        // Restarting consumer: able to process already consumed messages,
        // but not able to see newly published messages on the queue. I mean
        // a newly published message moves from ready to unack state but nothing
        // happens on the consumer side.
        if (count == 2) {
            channel.basicConsume("dataQueue", this);
            System.out.println("Consumer is started");
        }
    }
}
You must not do this channel.basicCancel(tag).
The channel/consumer are managed by Spring; the only thing you should do with the consumer argument is ack or nack messages (and even that is rarely needed - it's better to let the container do the acks).
To stop/start the consumer, use the endpoint registry as described in the documentation.
Containers created for annotations are not registered with the application context. You can obtain a collection of all containers by invoking getListenerContainers() on the RabbitListenerEndpointRegistry bean. You can then iterate over this collection, for example, to stop/start all containers or invoke the Lifecycle methods on the registry itself which will invoke the operations on each container.
e.g. registry.stop() will stop all the listeners.
You can also get a reference to an individual container using its id, using getListenerContainer(String id); for example registry.getListenerContainer("multi") for the container created by the snippet above.
If you are using AMQP/Rabbit, you can try one of these:
1) Prevent starting at startup in code:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    //
    // autoStartup = false prevents handling messages immediately. You need to start each listener yourself.
    //
    factory.setAutoStartup(false);
    factory.setMessageConverter(new Jackson2JsonMessageConverter());
    return factory;
}
2) Prevent starting at startup in app.yml/props:
spring.rabbitmq.listener.auto-startup: false
spring.rabbitmq.listener.simple.auto-startup: false
3) Start/stop individual listeners
Give your @RabbitListener an id:
@RabbitListener(queues = "myQ", id = "myQ")
...
and:

@Autowired
private RabbitListenerEndpointRegistry rabbitListenerEndpointRegistry;

MessageListenerContainer listener =
        rabbitListenerEndpointRegistry.getListenerContainer("myQ");
...
listener.start();
...
listener.stop();
