Producer callback in Spring Cloud Stream with reactor core publisher - spring-boot

I have written a Spring Cloud Stream application where producers publish messages to designated Kafka topics. My question is: how can I add a producer callback to receive an ack/confirmation that a message has been successfully published to the topic, like we do in Spring Kafka with producer.send(record, callback) (while keeping the producer asynchronous)? Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    try {
        String key = ...;
        Message<?> response = MessageBuilder.withPayload(status)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key)
                .build();
        responseProcessor.tryEmitNext(response);
        return response;
    } catch (Exception e) {
        // error handling elided
        throw e;
    }
}
How can I make sure that tryEmitNext has successfully written to the topic?
Is implementing a ProducerListener a possible solution? I couldn't find a concrete solution or documentation for this in Spring Cloud Stream.
UPDATE
I have now implemented the following, which seems to work as expected:
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on success
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}", producerRecord, exception);
    }
}
@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}

See the Kafka Producer Properties.
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null.
You can add a @ServiceActivator to consume from this channel asynchronously.
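For example, a minimal sketch of such a consumer, assuming an output binding named event-out-0 and a channel bean named sendResults (both names are illustrative):

@Bean
public MessageChannel sendResults() {
    return new DirectChannel();
}

// application.properties (binding name assumed):
// spring.cloud.stream.kafka.bindings.event-out-0.producer.recordMetadataChannel=sendResults

@ServiceActivator(inputChannel = "sendResults")
public void handleSendResult(Message<?> sent) {
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    log.info("Published to partition {} at offset {}", meta.partition(), meta.offset());
}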

Related

Consumption of events stopped after the consumer throws an exception in Spring Cloud Stream?

I have an aggregation function that aggregates events published into output-channel. I have subscribed to the Flux generated by the function as below:
@Component
public class EventsAggregator {

    @Autowired
    private Sinks.Many<Message<?>> eventsPublisher; // Used to publish events from different places in the code

    private final Function<Flux<Message<?>>, Flux<Message<?>>> aggregatorFunction;

    @PostConstruct
    public void aggregate() {
        Flux<Message<?>> output = aggregatorFunction.apply(eventsPublisher.asFlux());
        output.subscribe(this::aggregate);
    }

    public void aggregate(Message<?> aggregatedEventsMessage) {
        if (...) {
            // some code
        } else {
            throw new RuntimeException();
        }
    }
}
If the RuntimeException is thrown, the aggregation function stops working, and I get this message:
The [bean 'outputChannel'; defined in: 'class path resource [org/springframework/cloud/fn/aggregator/AggregatorFunctionConfiguration.class]'; from source: 'org.springframework.cloud.fn.aggregator.AggregatorFunctionConfiguration.outputChannel()'] doesn't have subscribers to accept messages
at org.springframework.util.Assert.state(Assert.java:97)
Is there any way to subscribe to the flux generated by the aggregation function in a safe way?
That's correct. That's how Reactive Streams work: if an exception is thrown, the subscriber is cancelled and no new data can be sent to that subscriber anymore.
Consider not throwing that exception up to the stream; handle it inside the subscriber instead.
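For example, a minimal sketch that keeps the subscription alive by catching the exception inside the consumer (the logging fallback is an assumption; you might route failures to an error channel instead):

output.subscribe(message -> {
    try {
        aggregate(message);
    } catch (RuntimeException ex) {
        // Log and drop instead of letting the exception cancel the subscription
        log.error("Failed to process aggregated message {}", message, ex);
    }
});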
See more in docs: https://docs.spring.io/spring-cloud-stream/docs/4.0.0-SNAPSHOT/reference/html/spring-cloud-stream.html#spring-cloud-stream-overview-error-handling

How to know which exception was thrown from the error handler in a dead letter queue listener?

I have a quorum queue (myQueue) and its dead letter queue (myDLQueue). We have several exceptions, which we have classified as retryable or fatal. But sometimes, in the listener below, we make an API call that throws a RateLimitException. In this case the application should increase both the retry count and the retry delay.
@RabbitListener(queues = "#{myQueue.getName()}", errorHandler = "myErrorHandler")
@SendTo("#{myStatusQueue.getName()}")
public Status process(@Payload MyMessage message, @Headers MessageHeaders headers) {
    int retries = headerProcessor.getRetries(headers);
    if (retries > properties.getMyQueueMaxRetries()) {
        throw new RetriesExceededException(retries);
    }
    if (retries > 0) {
        logger.info("Message {} has been retried {} times. Process it again anyway", kv("task_id", message.getTaskId()), retries);
    }

    // Here we send a request to an API, but sometimes the API returns a rate limit error
    // when we send too many requests. In that case makeApiCall throws RateLimitException,
    // which extends RetryableException.
    makeApiCall(); // --> it will throw RateLimitException

    if (/* a condition that needs to retry sending the message */) {
        throw new RetryableException();
    }
    if (/* a condition that should not retry */) {
        throw new FatalException();
    }
    return new Status("Step 1 Success!");
}
I also have an error handler (myErrorHandler) that catches exceptions thrown from the listener above and manages the retry process according to the type of the exception.
public class MyErrorHandler implements RabbitListenerErrorHandler {

    @Override
    public Object handleError(Message amqpMessage,
                              org.springframework.messaging.Message<?> message,
                              ListenerExecutionFailedException exception) {
        // Check if the error is fatal or retryable
        if (exception.getCause() /* ..is fatal? */) {
            return new Status("FAIL!");
        }
        // Retryable exception: rethrow it and let the message be NACKed and retried via the DLQ
        throw exception;
    }
}
The last part I have is a MyDLQueueHandler that listens to dead letter queue messages and re-sends them to the original queue (myQueue).
@Service
public class MyDLQueueHandler {

    private final MyAppProperties properties;
    private final MessageHeaderProcessor headerProcessor;
    private final RabbitProducerService rabbitProducerService;

    public MyDLQueueHandler(MyAppProperties properties, MessageHeaderProcessor headerProcessor, RabbitProducerService rabbitProducerService) {
        this.properties = properties;
        this.headerProcessor = headerProcessor;
        this.rabbitProducerService = rabbitProducerService;
    }

    /**
     * Since message TTL is not available with quorum queues, manually listen to the DL queue and re-send the message with a delay.
     * This allows messages to be processed again.
     */
    @RabbitListener(queues = "#{myDLQueue.getName()}")
    public void handleError(@Payload Object message, @Headers MessageHeaders headers) {
        String routingKey = headerProcessor.getRoutingKey(headers);
        Map<String, Object> newHeaders = Map.of(
                MessageHeaderProcessor.DELAY, properties.getRetryDelay(), // I need to send an increased delay in case of RateLimitException.
                MessageHeaderProcessor.RETRIES_HEADER, headerProcessor.getRetries(headers) + 1
        );
        rabbitProducerService.sendMessageDelayed(message, routingKey, newHeaders);
    }
}
The handleError method above receives no information about the exception instance thrown from MyErrorHandler or the myQueue listener. Currently I have to pass the retry delay by reading it from app.properties, but I need to increase this delay if a RateLimitException was thrown. So my question is: how do I know, while in MyDLQueueHandler, which exception was thrown in MyErrorHandler?
When you use the normal dead letter mechanism in RabbitMQ, there is no exception information provided - the message is the original rejected message. However, Spring AMQP provides a RepublishMessageRecoverer which can be used in conjunction with a retry interceptor. In that case, exception information is published in headers.
See https://docs.spring.io/spring-amqp/docs/current/reference/html/#async-listeners
The RepublishMessageRecoverer publishes the message with additional information in message headers, such as the exception message, stack trace, original exchange, and routing key. Additional headers can be added by creating a subclass and overriding additionalHeaders().
@Bean
RetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(5)
            .recoverer(new RepublishMessageRecoverer(amqpTemplate(), "something", "somethingelse"))
            .build();
}
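A hedged sketch of such a subclass (the class name MyRecoverer and the x-exception-type header are illustrative, not part of the framework):

public class MyRecoverer extends RepublishMessageRecoverer {

    public MyRecoverer(AmqpTemplate errorTemplate, String errorExchange, String errorRoutingKey) {
        super(errorTemplate, errorExchange, errorRoutingKey);
    }

    @Override
    protected Map<String, Object> additionalHeaders(Message message, Throwable cause) {
        // Expose the root exception class so the DLQ listener can branch on it,
        // e.g. apply a longer delay for RateLimitException
        Throwable root = cause.getCause() != null ? cause.getCause() : cause;
        return Map.of("x-exception-type", root.getClass().getName());
    }
}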
The interceptor is added to the container's advice chain.
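Wiring the interceptor into the container might look like this with a Spring Boot listener container factory (a sketch; the factory setup is an assumption):

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
        RetryOperationsInterceptor interceptor) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // Retries the listener in place; on exhaustion the recoverer republishes
    // the message, with exception headers, to the configured exchange
    factory.setAdviceChain(interceptor);
    return factory;
}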
https://github.com/spring-projects/spring-amqp/blob/57596c6a26be2697273cd97912049b92e81d3f1a/spring-rabbit/src/main/java/org/springframework/amqp/rabbit/retry/RepublishMessageRecoverer.java#L55-L61
public static final String X_EXCEPTION_STACKTRACE = "x-exception-stacktrace";
public static final String X_EXCEPTION_MESSAGE = "x-exception-message";
public static final String X_ORIGINAL_EXCHANGE = "x-original-exchange";
public static final String X_ORIGINAL_ROUTING_KEY = "x-original-routingKey";
The exception type can be found in the stack trace header.
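In the DLQ listener you can then inspect the republished headers; a minimal sketch (matching on the class name inside the stack trace header is an assumption):

@RabbitListener(queues = "#{myDLQueue.getName()}")
public void handleError(org.springframework.amqp.core.Message failed) {
    MessageProperties props = failed.getMessageProperties();
    String stackTrace = props.getHeader(RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE);
    long delay = properties.getRetryDelay();
    if (stackTrace != null && stackTrace.contains("RateLimitException")) {
        delay = delay * 2; // the increase factor is illustrative
    }
    // re-send with the computed delay as before
}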

Multiple @RabbitListeners sending replies to the same queue when using sendAndReceive() in the producer

I am using Spring Boot with Spring AMQP and I want to use the RPC pattern with the synchronous sendAndReceive method in the producer. My configuration assumes one exchange with two distinct bindings (one for each operation on the same resource). I want to send two messages with two different routing keys and receive the responses on distinct reply-to queues.
The problem is, as far as I know, sendAndReceive will wait for the reply on a queue named "<exchangeName>.replies", so both replies will be sent to the products.replies queue (at least that is my understanding).
My publisher config:
#Bean
public DirectExchange productsExchange() {
return new DirectExchange("products");
}
#Bean
public OrderService orderService() {
return new MqOrderService();
}
#Bean
public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
return rabbitTemplate;
}
#Bean
public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
return new Jackson2JsonMessageConverter();
}
and the 2 senders:
...
final Message response = template.sendAndReceive(productsExchange.getName(), "products.get", message);
...
final Message response = template.sendAndReceive(productsExchange.getName(), "products.stock.update", message);
...
consumer config:
@Bean
public Queue getProductQueue() {
    return new Queue("getProductBySku");
}

@Bean
public Queue updateStockQueue() {
    return new Queue("updateProductStock");
}

@Bean
public DirectExchange exchange() {
    return new DirectExchange("products");
}

@Bean
public Binding getProductBinding(DirectExchange exchange) {
    return BindingBuilder.bind(getProductQueue())
            .to(exchange)
            .with("products.get");
}

@Bean
public Binding modifyStockBinding(DirectExchange exchange) {
    return BindingBuilder.bind(updateStockQueue())
            .to(exchange)
            .with("products.stock.update");
}
and @RabbitListeners with the following signatures:

@RabbitListener(queues = "getProductBySku")
public Message getProduct(GetProductResource getProductResource) {...}

@RabbitListener(queues = "updateProductStock")
public Message updateStock(UpdateStockResource updateStockResource) {...}
I noticed that the second sender receives two responses, one of which is of an invalid type (it comes from the first receiver). Is there any way I can make these connections distinct? Or is using a separate exchange for each operation the only reasonable solution?
as far as I know, sendAndReceive will wait for reply on a queue with name ".replies"
Where did you get that idea?
Depending on which version you are using, either a temporary reply queue is created for each request, or RabbitMQ's "direct reply-to" mechanism is used, which again means each request is replied to on a dedicated pseudo-queue called amq.rabbitmq.reply-to.
I don't see any way for one producer to get another's reply; even if you use an explicit reply container (which is generally no longer necessary), the template will correlate the replies to the requests.
Try enabling DEBUG logging to see if it provides any hints.
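If you want to take direct reply-to out of the equation while debugging, the template can be forced onto temporary reply queues; a small sketch against the rabbitTemplate bean above (whether this changes the observed behaviour is an assumption):

@Bean
public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
    final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    // Use a fresh temporary reply queue per request instead of the default
    // direct reply-to pseudo queue (amq.rabbitmq.reply-to)
    rabbitTemplate.setUseTemporaryReplyQueues(true);
    rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
    return rabbitTemplate;
}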

How to dead-letter RabbitMQ messages when an exception happens in a service after an aggregator's forceRelease

I am trying to figure out the best way to handle errors that might occur in a service that is called after an aggregator's group timeout has occurred, mimicking the same flow as if the releaseExpression had been met.
Here is my setup:
I have an AmqpInboundChannelAdapter that takes in messages and sends them to my aggregator.
When the releaseExpression has been met before the groupTimeout has expired, and an exception gets thrown in my ServiceActivator, all the messages in that MessageGroup get sent to my dead letter queue (10 messages in my example below, which is only used for illustrative purposes). This is what I would expect.
If my releaseExpression hasn't been met but the groupTimeout has been reached and the group times out, and an exception gets thrown in my ServiceActivator, then the messages do not get sent to my dead letter queue and are acked.
After reading another blog post (link1), I learned that this happens because the processing happens on another thread, belonging to the MessageGroupStoreReaper, rather than the one the SimpleMessageListenerContainer was on. Once processing moves away from the listener container's thread, the messages are auto-acked.
I added the configuration mentioned in the link above and can see the error messages getting sent to my error handler. My main question is: what is considered the best way to handle this scenario so as to minimize messages getting lost?
Here are the options I was exploring:
Use a BatchingRabbitTemplate in my custom error handler to publish the failed messages to the same dead letter queue they would have gone to if the releaseExpression had been met. (This is the approach I outlined below, but I am worried about messages getting lost if an error happens during publishing.)
Investigate whether there is a way I could let the SimpleMessageListenerContainer know about the error that occurred and have it send the batch of messages that failed to a dead letter queue. I doubt this is possible, since it seems the messages are already acked.
Don't set the SimpleMessageListenerContainer to AcknowledgeMode.AUTO, and instead manually ack the messages when they get processed via the service, whether the releaseExpression is met or the groupTimeout happens. (This seems kind of messy, since there can be 1..N messages in the MessageGroup, but I wanted to see what others have done.)
Ideally, I want a flow that mimics the flow when the releaseExpression has been met, so that the messages don't get lost.
Does anyone have recommendation on the best way to handle this scenario they have used in the past?
Thanks for any help and/or advice!
Here is my current configuration using the Spring Integration Java DSL:
@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    container.setTransactionManager(transactionManager);
    container.setChannelTransacted(true);
    container.setTxSize(10);
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    return container;
}

@Bean
public AmqpInboundChannelAdapter inboundRabbitMessages() {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(workListenerContainer());
    return adapter;
}
I have defined an error channel and my own taskScheduler for the MessageGroupStoreReaper to use:
@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler ts = new ThreadPoolTaskScheduler();
    MessagePublishingErrorHandler mpe = new MessagePublishingErrorHandler();
    mpe.setDefaultErrorChannel(myErrorChannel());
    ts.setErrorHandler(mpe);
    return ts;
}

@Bean
public PollableChannel myErrorChannel() {
    return new QueueChannel();
}
@Bean
public IntegrationFlow aggregationFlow() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
            })
            .handle("someService", "processMessages")
            .get();
}
Here is my custom error flow
@Bean
public IntegrationFlow errorResponse() {
    return IntegrationFlows.from("myErrorChannel")
            .<MessagingException, Message<?>>transform(MessagingException::getFailedMessage,
                    e -> e.poller(p -> p.fixedDelay(100)))
            .channel("myErrorChannelHandler")
            .handle("myErrorHandler", "handleFailedMessage")
            .log()
            .get();
}
Here is the custom error handler
@Component
public class MyErrorHandler {

    @Autowired
    BatchingRabbitTemplate batchingRabbitTemplate;

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(Message<?> message) {
        ArrayList<SomeObject> payload = (ArrayList<SomeObject>) message.getPayload();
        payload.forEach(m -> batchingRabbitTemplate.convertAndSend("some.dlq", "#", m));
    }
}
Here is the BatchingRabbitTemplate bean
@Bean
public BatchingRabbitTemplate batchingRabbitTemplate() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(5);
    scheduler.initialize();
    BatchingStrategy batchingStrategy = new SimpleBatchingStrategy(10, Integer.MAX_VALUE, 30000);
    BatchingRabbitTemplate batchingRabbitTemplate = new BatchingRabbitTemplate(batchingStrategy, scheduler);
    batchingRabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
    return batchingRabbitTemplate;
}
Update 1, to show the custom MessageGroupProcessor:
public class CustomAggregtingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    @Override
    protected final Object aggregatePayloads(MessageGroup group, Map<String, Object> headers) {
        return group;
    }
}
Example Service:
@Slf4j
public class SomeService {

    @ServiceActivator
    public void processMessages(MessageGroup messageGroup) throws IOException {
        Collection<Message<?>> messages = messageGroup.getMessages();
        // Do business logic
        // Ack messages in the group
        for (Message<?> m : messages) {
            com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                    m.getHeaders().get("amqp_channel");
            long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
            log.debug("deliveryTag = {}", deliveryTag);
            log.debug("Channel = {}", channel);
            channel.basicAck(deliveryTag, false);
        }
    }
}
Updated integrationFlow
@Bean
public IntegrationFlow aggregationFlowWithCustomMessageProcessor() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
New error handler to do the nack:
public class MyErrorHandler {

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(MessageGroup messageGroup) throws IOException {
        if (messageGroup != null) {
            log.debug("Nack messages size = {}", messageGroup.getMessages().size());
            Collection<Message<?>> messages = messageGroup.getMessages();
            for (Message<?> m : messages) {
                com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                        m.getHeaders().get("amqp_channel");
                long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
                log.debug("deliveryTag = {}", deliveryTag);
                log.debug("channel = {}", channel);
                channel.basicNack(deliveryTag, false, false);
            }
        }
    }
}
Update 2: added a custom ReleaseStrategy and changed the aggregator
public class CustomMeasureGroupReleaseStratgedy implements ReleaseStrategy {

    private static final int MAX_MESSAGE_COUNT = 10;

    @Override
    public boolean canRelease(MessageGroup messageGroup) {
        return messageGroup.getMessages().size() >= MAX_MESSAGE_COUNT;
    }
}
@Bean
public IntegrationFlow aggregationFlowWithCustomMessageProcessorAndReleaseStratgedy() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.transactional(true);
                a.releaseStrategy(new CustomMeasureGroupReleaseStratgedy());
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
There are some flaws in your understanding. If you use AUTO, only the last message will be dead-lettered when an exception occurs. Messages successfully deposited in the group before the release will be ack'd immediately.
The only way to achieve what you want is to use MANUAL acks.
There is no way to "tell the listener container to send messages to the DLQ". The container never sends messages to the DLQ; it rejects a message, and the broker sends it to the DLX/DLQ.
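Applied to the workListenerContainer bean from the question, the change is small (a sketch; with MANUAL acks, the service and error handler shown in the updates become responsible for acking or nacking every delivery):

@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    // MANUAL instead of AUTO: nothing is acked until the service (or the
    // error handler) explicitly calls basicAck()/basicNack()
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}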

Not able to filter messages received using the condition attribute in the Spring Cloud Stream @StreamListener annotation

I am trying to create an event-based system for communicating between services, using Apache Kafka as the messaging system and Spring Cloud Stream Kafka.
I have written my Receiver class methods as below,
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeCreatedEvent'")
public void handleEmployeeCreatedEvent(@Payload String payload) {
    logger.info("Received EmployeeCreatedEvent: " + payload);
}
This method is specifically meant to catch messages or events related to EmployeeCreatedEvent.
@StreamListener(target = Sink.INPUT, condition = "headers['eventType']=='EmployeeTransferredEvent'")
public void handleEmployeeTransferredEvent(@Payload String payload) {
    logger.info("Received EmployeeTransferredEvent: " + payload);
}
This method is specifically meant to catch messages or events related to EmployeeTransferredEvent.
@StreamListener(target = Sink.INPUT)
public void handleDefaultEvent(@Payload String payload) {
    logger.info("Received payload: " + payload);
}
This is the default method.
When I run the application, I do not see the methods annotated with the condition attribute being called; I only see the handleDefaultEvent method being called.
I am sending a message to this receiver application from the sending/source app using the CustomMessageSource class below:
@Component
@EnableBinding(Source.class)
public class CustomMessageSource {

    @Autowired
    private Source source;

    public void sendMessage(String payload, String eventType) {
        Message<String> myMessage = MessageBuilder.withPayload(payload)
                .setHeader("eventType", eventType)
                .build();
        source.output().send(myMessage);
    }
}
I am calling the method from my controller in Source App as below,
customMessageSource.sendMessage("Hello","EmployeeCreatedEvent");
The customMessageSource instance is autowired as below,
@Autowired
CustomMessageSource customMessageSource;
Basically, I would like to filter messages received by the Sink/Receiver application and handle them accordingly.
For this I have used the #StreamListener annotation with condition attribute to simulate the behaviour of handling different events.
I am using Spring Cloud Stream Chelsea.SR2 version.
Can someone help me resolve this issue?
It seems like the headers are not propagated. Make sure you include the custom headers in spring.cloud.stream.kafka.binder.headers; see http://docs.spring.io/autorepo/docs/spring-cloud-stream-docs/Chelsea.SR2/reference/htmlsingle/#_kafka_binder_properties.
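For example, in application.properties (the header name matches the one set in CustomMessageSource):

spring.cloud.stream.kafka.binder.headers=eventType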
