Consumption of events stopped after the consumer throws an exception in Spring Cloud Stream

I have an aggregation function that aggregates events published into the output channel. I have subscribed to the Flux generated by the function like below:
@Component
public class EventsAggregator {

    @Autowired
    private Sinks.Many<Message<?>> eventsPublisher; // Used to publish events from different places in the code

    private final Function<Flux<Message<?>>, Flux<Message<?>>> aggregatorFunction;

    @PostConstruct
    public void aggregate() {
        Flux<Message<?>> output = aggregatorFunction.apply(eventsPublisher.asFlux());
        output.subscribe(this::aggregate);
    }

    public void aggregate(Message<?> aggregatedEventsMessage) {
        if (...) {
            // some code
        } else {
            throw new RuntimeException();
        }
    }
}
If the RuntimeException is thrown, the aggregation function stops working, and I get this message: The [bean 'outputChannel'; defined in: 'class path resource [org/springframework/cloud/fn/aggregator/AggregatorFunctionConfiguration.class]'; from source: 'org.springframework.cloud.fn.aggregator.AggregatorFunctionConfiguration.outputChannel()'] doesn't have subscribers to accept messages at org.springframework.util.Assert.state(Assert.java:97)
Is there any way to subscribe to the flux generated by the aggregation function in a safe way?

That's correct. That's how Reactive Streams work: if an exception is thrown, the subscriber is cancelled and no new data can be sent to that subscriber anymore.
Consider not throwing that exception up to the stream.
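For example, a minimal sketch (assuming a logger field named log is available, and that a failed message can simply be logged and skipped) catches the exception inside the subscriber so it never reaches the reactive pipeline:

    @PostConstruct
    public void aggregate() {
        Flux<Message<?>> output = aggregatorFunction.apply(eventsPublisher.asFlux());
        output.subscribe(aggregatedEventsMessage -> {
            try {
                this.aggregate(aggregatedEventsMessage);
            } catch (RuntimeException ex) {
                // Log and drop the failed message instead of letting the error cancel the subscription.
                log.error("Failed to process aggregated message", ex);
            }
        });
    }

This way a bad message is logged and skipped, and subsequent messages keep flowing to the subscriber.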
See more in docs: https://docs.spring.io/spring-cloud-stream/docs/4.0.0-SNAPSHOT/reference/html/spring-cloud-stream.html#spring-cloud-stream-overview-error-handling

Related

How to know which exception is thrown from errorhandler in dead letter queue listener?

I have a quorum queue (myQueue) and its dead letter queue (myDLQueue). We have several exceptions which we have separated into retryable or fatal. But sometimes, in the listener below, we make an API call that throws a RateLimitException. In this case the application should increase both the retry count and the retry delay.
@RabbitListener(queues = "#{myQueue.getName()}", errorHandler = "myErrorHandler")
@SendTo("#{myStatusQueue.getName()}")
public Status process(@Payload MyMessage message, @Headers MessageHeaders headers) {
    int retries = headerProcessor.getRetries(headers);
    if (retries > properties.getMyQueueMaxRetries()) {
        throw new RetriesExceededException(retries);
    }
    if (retries > 0) {
        logger.info("Message {} has been retried {} times. Process it again anyway", kv("task_id", message.getTaskId()), retries);
    }

    // Here we send a request to an API, but sometimes the API returns a rate limit error when we send too many requests.
    // In that case makeApiCall throws RateLimitException, which extends RetryableException.
    makeApiCall(); // --> it will throw RateLimitException

    if (/* a condition that needs to retry sending the message */) {
        throw new RetryableException();
    }
    if (/* a condition that should not retry */) {
        throw new FatalException();
    }
    return new Status("Step 1 Success!");
}
I also have an error handler (myErrorHandler) that catches exceptions thrown from the listener above and manages the retry process according to the type of the exception.
public class MyErrorHandler implements RabbitListenerErrorHandler {

    @Override
    public Object handleError(Message amqpMessage,
                              org.springframework.messaging.Message<?> message,
                              ListenerExecutionFailedException exception) {
        // Check if the error is fatal or retryable
        if (exception.getCause() /* ..is fatal? */) {
            return new Status("FAIL!");
        }
        // Retryable exception: rethrow it and let the message be NACKed and retried via the DLQ
        throw exception;
    }
}
The last part I have is a DLQ handler that listens to the dead letter queue and re-sends messages to the original queue (myQueue).
@Service
public class MyDLQueueHandler {

    private final MyAppProperties properties;
    private final MessageHeaderProcessor headerProcessor;
    private final RabbitProducerService rabbitProducerService;

    public MyDLQueueHandler(MyAppProperties properties, MessageHeaderProcessor headerProcessor, RabbitProducerService rabbitProducerService) {
        this.properties = properties;
        this.headerProcessor = headerProcessor;
        this.rabbitProducerService = rabbitProducerService;
    }

    /**
     * Since message TTL is not available with quorum queues, manually listen to the DL queue and re-send the message with a delay.
     * This allows messages to be processed again.
     */
    @RabbitListener(queues = "#{myDLQueue.getName()}")
    public void handleError(@Payload Object message, @Headers MessageHeaders headers) {
        String routingKey = headerProcessor.getRoutingKey(headers);
        Map<String, Object> newHeaders = Map.of(
                MessageHeaderProcessor.DELAY, properties.getRetryDelay(), // I need to send an increased delay in case of RateLimitException.
                MessageHeaderProcessor.RETRIES_HEADER, headerProcessor.getRetries(headers) + 1
        );
        rabbitProducerService.sendMessageDelayed(message, routingKey, newHeaders);
    }
}
The inputs to the handleError method above do not carry any information about the exception that was thrown in MyErrorHandler or in the myQueue listener. Currently I have to pass the retry delay by reading it from app.properties, but I need to increase this delay if a RateLimitException was thrown. So my question is: how do I know which exception was thrown from MyErrorHandler while I am in MyDLQueueHandler?
When you use the normal dead letter mechanism in RabbitMQ, there is no exception information provided - the message is the original rejected message. However, Spring AMQP provides a RepublishMessageRecoverer which can be used in conjunction with a retry interceptor. In that case, exception information is published in headers.
See https://docs.spring.io/spring-amqp/docs/current/reference/html/#async-listeners
The RepublishMessageRecoverer publishes the message with additional information in message headers, such as the exception message, stack trace, original exchange, and routing key. Additional headers can be added by creating a subclass and overriding additionalHeaders().
@Bean
RetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(5)
            .recoverer(new RepublishMessageRecoverer(amqpTemplate(), "something", "somethingelse"))
            .build();
}
The interceptor is added to the container's advice chain.
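For example, a minimal sketch (assuming the listener container factory is defined in the same @Configuration class as the interceptor() bean above) might wire it in like this:

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Add the retry interceptor (with the RepublishMessageRecoverer) to the advice chain.
    factory.setAdviceChain(interceptor());
    return factory;
}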
https://github.com/spring-projects/spring-amqp/blob/57596c6a26be2697273cd97912049b92e81d3f1a/spring-rabbit/src/main/java/org/springframework/amqp/rabbit/retry/RepublishMessageRecoverer.java#L55-L61
public static final String X_EXCEPTION_STACKTRACE = "x-exception-stacktrace";
public static final String X_EXCEPTION_MESSAGE = "x-exception-message";
public static final String X_ORIGINAL_EXCHANGE = "x-original-exchange";
public static final String X_ORIGINAL_ROUTING_KEY = "x-original-routingKey";
The exception type can be found in the stack trace header.
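Building on that, here is a minimal sketch of how the DLQ handler could inspect the x-exception-stacktrace header to pick a longer delay for rate-limit failures. The getRateLimitRetryDelay() accessor is hypothetical, and it assumes the republished x-* headers are mapped into the MessageHeaders (otherwise the same values can be read from the raw AMQP message properties):

@RabbitListener(queues = "#{myDLQueue.getName()}")
public void handleError(@Payload Object message, @Headers MessageHeaders headers) {
    String routingKey = headerProcessor.getRoutingKey(headers);
    // Header added by RepublishMessageRecoverer; contains the full stack trace of the failure.
    String stackTrace = headers.get("x-exception-stacktrace", String.class);
    // Hypothetical: use a longer delay when the failure was a RateLimitException.
    long delay = (stackTrace != null && stackTrace.contains("RateLimitException"))
            ? properties.getRateLimitRetryDelay()
            : properties.getRetryDelay();
    Map<String, Object> newHeaders = Map.of(
            MessageHeaderProcessor.DELAY, delay,
            MessageHeaderProcessor.RETRIES_HEADER, headerProcessor.getRetries(headers) + 1
    );
    rabbitProducerService.sendMessageDelayed(message, routingKey, newHeaders);
}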

Spring Cloud Function: Function interface return success/failure handling

I currently have a Spring Cloud Stream application with a listener function that mainly listens to a certain topic and executes the following in sequence:
1. Consume messages from a topic
2. Store the consumed message in the DB
3. Call an external service for some information
4. Process the data
5. Record the results in the DB
6. Send the message to another topic
7. Acknowledge the message (I have the acknowledge mode set to manual)
We have decided to move to Spring Cloud Function, and I have already been able to do almost all of the steps above using the Function interface, with the source topic as input and the sink topic as output.
@Bean
public Function<Message<NotificationMessage>, Message<ValidatedEvent>> validatedProducts() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        return MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
    };
}
My problem is with exception handling in step 7 (acknowledging the message). We only acknowledge the message if we are sure it was sent successfully to the sink topic; otherwise we do not acknowledge it.
My question is: how can such a thing be implemented with Spring Cloud Function, especially since the send is fully handled by the framework (the result of the Function implementation is what gets published)?
Earlier, we could do this through try/catch:
@StreamListener(value = NotificationMessage.INPUT)
public void onMessage(Message<NotificationMessage> message) {
    try {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        Message<ValidatedEvent> outgoingMessage = MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
        kafkaTemplate.send(outgoingMessage);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
    } catch (Exception exception) {
        notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
    }
}
Is there a listener that triggers after the Function has returned successfully, something like KafkaSendCallback but without specifying a template?
Building upon what Oleg mentioned above, if you want to strictly restore the behavior of your StreamListener code, here is something you can try. Instead of using a Function, you can switch to a Consumer and then use KafkaTemplate to send on the outbound as you did previously.
@Bean
public Consumer<Message<NotificationMessage>> validatedProducts() {
    return message -> {
        try {
            Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
            String status = restEndpoint.getStatusFor(message.getPayload());
            ValidatedEvent event = getProcessingResult(message.getPayload(), status);
            Message<ValidatedEvent> outgoingMessage = MessageBuilder
                    .withPayload(event)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                    .build();
            kafkaTemplate.send(outgoingMessage); // here, you make sure that the data was sent successfully by using some callback
            // only ack if the data was sent successfully
            Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        } catch (Exception exception) {
            notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
        }
    };
}
Another thing worth looking into is using Kafka transactions, in which case, if the processing doesn't complete end-to-end, no acknowledgment will happen. The Spring Cloud Stream binder has support for this based on the foundations in Spring for Apache Kafka. More details here. Here is the Spring Cloud Stream doc on this.
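For reference, a minimal sketch of enabling binder-level transactions in application.properties (the prefix value tx- is illustrative; see the binder documentation for the full set of transaction-related properties):

# Enables a transactional producer factory in the Kafka binder (sketch)
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-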
Spring Cloud Stream has no knowledge of functions. It is just the same message handler as it was before, so the same approach with a callback as you used before would work with functions. So perhaps you can share some code that could clarify what you mean? I also don't understand what you mean by "..send method is fully dependant on the Spring Framework..".
Alright, so what I opted for was actually not to use KafkaTemplate (or StreamBridge, for that matter). While that is a feasible solution, it would mean that my Function would be split into a Consumer and some sort of improvised Supplier (the KafkaTemplate in this case).
As I wanted to adhere to the design goals of the functional interface, I isolated the database-update behaviour in a ProducerListener implementation:
@Configuration
public class ProducerListenerConfiguration {

    private final MongoTemplate mongoTemplate;

    public ProducerListenerConfiguration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public ProducerListener myProducerListener() {
        return new ProducerListener() {

            @SneakyThrows
            @Override
            public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
                final ValidatedEvent event = new ObjectMapper().readerFor(ValidatedEvent.class).readValue((byte[]) producerRecord.value());
                final var updateResult = updateDocumentProcessedState(event.getKey(), event.getPayload().getVersion(), true);
            }

            @SneakyThrows
            @Override
            public void onError(ProducerRecord producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
                ProducerListener.super.onError(producerRecord, recordMetadata, exception);
            }
        };
    }

    public UpdateResult updateDocumentProcessedState(String id, long version, boolean isProcessed) {
        Query query = new Query();
        query.addCriteria(Criteria.where("_id").is(id));

        Update update = new Update();
        update.set("processed", isProcessed);
        update.set("version", version);

        return mongoTemplate.updateFirst(query, update, ProductChangedEntity.class);
    }
}
Then with each successful attempt, the DB is updated with the processing result and the updated version number.

Producer callback in Spring Cloud Stream with reactor core publisher

I have written a Spring Cloud Stream application where producers publish messages to designated Kafka topics. My question is: how can I add a producer callback to receive an ack/confirmation that the message has been successfully published to the topic, like we do in Spring Kafka with producer.send(record, new Callback() { ... }) (maintaining an async producer)? Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    String key = ...;
    Message<?> response = MessageBuilder.withPayload(payload)
            .setHeader(KafkaHeaders.MESSAGE_KEY, key)
            .build();
    responseProcessor.tryEmitNext(response);
    return response;
}
How can I make sure that tryEmitNext has successfully written to the topic?
Is implementing ProducerListener a solution, and is it possible? I couldn't find a concrete solution or documentation for this in Spring Cloud Stream.
UPDATE
I have now implemented the following; it seems to work as expected:
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on onSuccess
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}", producerRecord, exception);
    }
}

@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}
See the Kafka Producer Properties.
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null.
You can add a @ServiceActivator to consume from this channel asynchronously.
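A minimal sketch of how that might look (the binding name event-out-0, the channel name sendResults, and the log field are assumptions for illustration):

// application.properties (sketch):
// spring.cloud.stream.kafka.bindings.event-out-0.producer.recordMetadataChannel=sendResults

@Bean
public MessageChannel sendResults() {
    return new DirectChannel();
}

@ServiceActivator(inputChannel = "sendResults")
public void onSendResult(Message<?> sent) {
    // The header holds the RecordMetadata with the partition and offset the record was written to.
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    log.info("Published to {}-{}@{}", meta.topic(), meta.partition(), meta.offset());
}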

Spring Boot Exception(Error) Handling for RESTful Services

I have the following RESTful Services method :
@PostMapping("/ajouterNewField")
public String ajouterField(@Valid @ModelAttribute("field") Fields field, Model model) throws IOException {
    fieldDao.save(field);
    // SOME CODE
    return displayListeChamps(model);
}
The method is working fine, and my question is how to handle any error (database not connected, ...) or any other issue that can happen during the execution of this RESTful service method.
You can use @ControllerAdvice.
Refer to the code below:
@ControllerAdvice
public class MyExceptionHandlerAdvice {

    private final Logger logger = ...;

    @ResponseStatus(HttpStatus.INTERNAL_SERVER_ERROR)
    @ExceptionHandler({MyRunTimeException.class})
    public String handleMyRunTimeException(Exception e) {
        logger.error("Exception : ", e);
        return MY_ERROR_STRING;
    }
}
Best practice is:
You can have your code throw RuntimeExceptions and handle all of them, together or separately, in handler methods similar to handleMyRunTimeException above.
You can decide what status code your request should return upon an exception.
Basically you'll have to add a sort of exception handler for any kind of exception your method might throw:
public class FooController {

    // ...

    @ExceptionHandler({ CustomException1.class, CustomException2.class })
    public void handleException() {
        //
    }
}
Here's a nice article about that: https://www.baeldung.com/exception-handling-for-rest-with-spring

Not able to filter received messages using the condition attribute of the Spring Cloud Stream @StreamListener annotation

I am trying to create an event-based system for communication between services, using Apache Kafka as the messaging system and Spring Cloud Stream Kafka.
I have written my Receiver class methods as below,
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeCreatedEvent'")
public void handleEmployeeCreatedEvent(@Payload String payload) {
    logger.info("Received EmployeeCreatedEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeCreatedEvent.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeTransferredEvent'")
public void handleEmployeeTransferredEvent(@Payload String payload) {
    logger.info("Received EmployeeTransferredEvent: " + payload);
}
This method specifically catches messages or events related to EmployeeTransferredEvent.
@StreamListener(target = Sink.INPUT)
public void handleDefaultEvent(@Payload String payload) {
    logger.info("Received payload: " + payload);
}
This is the default method.
When I run the application, I do not see the methods annotated with the condition attribute being called. I only see the handleDefaultEvent method being called.
I am sending a message to this receiver application from the sending/source app using the CustomMessageSource class below:
@Component
@EnableBinding(Source.class)
public class CustomMessageSource {

    @Autowired
    private Source source;

    public void sendMessage(String payload, String eventType) {
        Message<String> myMessage = MessageBuilder.withPayload(payload)
                .setHeader("eventType", eventType)
                .build();
        source.output().send(myMessage);
    }
}
I am calling the method from my controller in Source App as below,
customMessageSource.sendMessage("Hello","EmployeeCreatedEvent");
The customMessageSource instance is autowired as below,
@Autowired
CustomMessageSource customMessageSource;
Basically, I would like to filter the messages received by the sink/receiver application and handle them accordingly.
For this I have used the @StreamListener annotation with the condition attribute to simulate the behaviour of handling different events.
I am using Spring Cloud Stream Chelsea.SR2.
Can someone help me resolve this issue?
It seems like the headers are not propagated. Make sure you include the custom headers in spring.cloud.stream.kafka.binder.headers. See http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#_kafka_binder_properties.
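For example, a minimal sketch of the property in the producer application (assuming eventType is the only custom header you need transported):

# Tell the Kafka binder to embed this custom header in outgoing messages
spring.cloud.stream.kafka.binder.headers=eventType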
