Transactions in MessageListener from MongoDB (Spring)

Hello, I am using the DefaultMessageListenerContainer from Spring Data MongoDB core, and I wonder what happens if an error occurs while a message is being processed in the MessageListener. Is there a way to implement transactions so that message processing is retried in case of errors? Thank you.
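One option, sketched below under assumptions: the change-stream container does not redeliver an event when the listener throws, so retries can be done inside the listener itself, e.g. with Spring Retry's RetryTemplate. The payload type and the handle method are hypothetical placeholders for your own processing logic.

```java
import org.bson.Document;
import org.springframework.data.mongodb.core.messaging.MessageListener;
import org.springframework.retry.support.RetryTemplate;

import com.mongodb.client.model.changestream.ChangeStreamDocument;

public class RetryingListenerFactory {

    public MessageListener<ChangeStreamDocument<Document>, Document> retryingListener() {
        RetryTemplate retryTemplate = RetryTemplate.builder()
                .maxAttempts(3)       // give up after 3 attempts
                .fixedBackoff(1000)   // wait 1s between attempts
                .build();
        // each incoming change-stream event is processed inside the retry template
        return message -> retryTemplate.execute(ctx -> {
            handle(message.getBody());  // hypothetical processing logic
            return null;
        });
    }

    private void handle(Document body) {
        // process the event; throwing from here triggers a retry
    }
}
```

If the retries are exhausted, the exception propagates to the container's ErrorHandler; the event itself is not redelivered by MongoDB, so anything that must not be lost should be persisted or dead-lettered by your own code.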

Related

How to retry consuming message, then stop consuming, when error occurs in listener

I have a Kafka listener writing data to a database. In case of a database timeout (JdbcException), I'd like to retry, and if the timeouts persist, stop consuming Kafka messages.
As far as I understand, Spring Kafka 2.9 has two CommonErrorHandler implementations:
DefaultErrorHandler tries to redeliver messages several times and then sends failed messages to the logs or a DLQ
CommonContainerStoppingErrorHandler stops the listener container and message consumption
I would like to chain both of them: first try to redeliver messages several times, then stop the container when delivery doesn't succeed.
How can I do that?
Use a DefaultErrorHandler with a custom recoverer that calls a CommonContainerStoppingErrorHandler after the retries are exhausted.
See this answer for an example
How do you exit spring boot application programmatically when retries are exhausted, to prevent kafka offset commit
(It uses the older SeekToCurrentErrorHandler, but the same concept applies.)
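A sketch of that chaining, based on the approach in the linked answer (Spring Kafka 2.9 APIs; the back-off values are examples): the DefaultErrorHandler retries first, and its recoverer delegates to a CommonContainerStoppingErrorHandler once retries are exhausted.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.listener.CommonContainerStoppingErrorHandler;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.util.backoff.FixedBackOff;

public class ErrorHandlerConfig {

    @Bean
    DefaultErrorHandler errorHandler() {
        CommonContainerStoppingErrorHandler stopper = new CommonContainerStoppingErrorHandler();
        // captured so the recoverer can hand them to the stopping handler
        AtomicReference<Consumer<?, ?>> consumerRef = new AtomicReference<>();
        AtomicReference<MessageListenerContainer> containerRef = new AtomicReference<>();
        return new DefaultErrorHandler((record, ex) -> {
            // retries exhausted: stop the container instead of recovering
            stopper.handleRemaining(ex, Collections.singletonList(record),
                    consumerRef.get(), containerRef.get());
        }, new FixedBackOff(2000L, 3L)) {  // 3 redeliveries, 2s apart

            @Override
            public void handleRemaining(Exception ex, List<ConsumerRecord<?, ?>> records,
                    Consumer<?, ?> consumer, MessageListenerContainer container) {
                // remember the consumer/container for the recoverer above
                consumerRef.set(consumer);
                containerRef.set(container);
                super.handleRemaining(ex, records, consumer, container);
            }
        };
    }
}
```

Because the container is stopped before the failed offset is committed, the message is redelivered when the container is restarted.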

Spring Kafka with Spring JPA

I have two microservices, A and B.
Service A sends a message to the Kafka topic "A-Topic"; service B consumes the message.
In service B, the Kafka listener performs the following steps:
1. Persist the data in the database (repo.save(entity))
2. Publish the response message to "B-Topic" (kafkaTemplate.send("B-Topic", message))
I am using Spring's @Transactional annotation at the service level in both services.
In the success scenario, the data is persisted and the success message is published to the topic only once,
whereas
in the failure scenario the database save failed due to an integrity constraint violation. In this case, the failure message is published to Kafka 10 times in a row.
If I remove the @Transactional annotation from the service class, the message is published only once in the failure scenario as well.
I don't understand how the @Transactional annotation causes the message to be published 10 times to Kafka.
Please let me know your inputs.
Thanks in advance.
The default error handler will attempt delivery 10 times; you need to enable Kafka transactions in the listener container so the Kafka sends will be rolled back (and the consumer on B-Topic needs isolation.level=read_committed).
https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions
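With Spring Boot, this can be sketched in configuration alone (property names from Spring Boot's spring.kafka namespace; the prefix value is an example):

```properties
# Setting a transaction-id-prefix makes the listener container run the
# listener in a Kafka transaction, so KafkaTemplate sends roll back with it.
spring.kafka.producer.transaction-id-prefix=tx-
# Consumers of B-Topic must not see uncommitted/rolled-back sends.
spring.kafka.consumer.isolation-level=read_committed
```

When the database save then fails, the Kafka transaction is rolled back together with the JPA one, and the 10 redelivery attempts never publish a visible message.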

Quarkus / Smallrye Reactive Messaging - message redelivery

I'm currently investigating the SmallRye Reactive Messaging integration in Quarkus. Sending and receiving messages is really simple and elegant at first glance.
But one thing which I didn't find out is: How to handle a re-delivery of messages?
Example: we receive a message and try to process it. Some exception occurs (maybe the DB is not available, or an optimistic lock exception, or something similar).
In such a case I would throw an exception so that the message is not acknowledged. But currently I see no way the message gets redelivered.
I set up a small dummy project to test this:
- Quarkus
- ActiveMQ Artemis
- send a message (via the Artemis console) into a queue configured with max redelivery = 3
- receive the message with a Quarkus / SmallRye Reactive Messaging @Incoming annotation
- throw an exception in the @Incoming method
--> the message is removed from the Artemis queue
--> the @Incoming method is only called once
If I shut down the Quarkus app, the message can be seen again in the Artemis queue with the redelivered flag set to true.
But I find no way to manage/configure redelivery in SmallRye Reactive Messaging so that this layer redelivers a message up to n times and puts it into a DLQ after the maximum number of retries.
Is there any way to do this?
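One avenue worth checking, sketched here as an assumption (the channel name "requests" is an example; verify the exact keys against the SmallRye AMQP connector documentation for your version): the connector's failure-strategy controls what happens when the message is nacked, and the "modified" strategies hand redelivery back to the broker so Artemis' max-delivery-attempts and DLQ settings can take effect.

```properties
mp.messaging.incoming.requests.connector=smallrye-amqp
# modified-failed nacks the message without removing it, leaving
# redelivery counting and dead-lettering to the Artemis broker
mp.messaging.incoming.requests.failure-strategy=modified-failed
```

With the default strategy the message is typically rejected and removed from the queue, which matches the behaviour observed above.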

JMS with Spring Integration or Spring Batch

Our project is to integrate two applications, using the REST API of each and using JMS (to make the integration asynchronous). Application 1 writes the message to the queue. The next step is to read the message from the queue, process it, and send it to Application 2.
I have two questions:
Should we use one more queue for storing messages after processing and before sending them to Application 2?
Should we use Spring Batch or Spring Integration to read/process the data?
Either you aren't showing the whole premise, or you are really over-engineering your app. If all you need is to read messages from the queue, plain Spring JMS is enough. On the other hand, with Spring Integration and the power of its adapters, you can simply route messages from an <int-jms:message-driven-channel-adapter> to an <int-http:outbound-channel-adapter>.
I see no reason to store the message somewhere else between reading and sending: if an exception occurs, the message is simply rolled back onto the JMS queue.
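That flow can be sketched as follows (bean names, destination, and URL are assumptions): a transacted JMS inbound adapter feeding an HTTP outbound adapter, so an exception anywhere in the flow rolls the message back onto the queue.

```xml
<int-jms:message-driven-channel-adapter id="fromApp1"
        connection-factory="connectionFactory"
        destination-name="app1.queue"
        acknowledge="transacted"
        channel="toApp2"/>

<int:channel id="toApp2"/>

<int-http:outbound-channel-adapter channel="toApp2"
        url="http://app2.example.com/api/messages"
        http-method="POST"/>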

Distributed transactions in Spring: JMS+JPA

How to synchronize JMS and JPA transactions with Spring without JTA?
Ideally, both JPA and JMS should roll back if a failure (exception) happens before successful return from JMS transaction commit.
The case is receiving messages via a DefaultMessageListenerContainer. The message-handling code stores to a JPA repository.
Is it enough to have the DMLC use a JPA transaction manager?
Or should a TransactionAwareConnectionFactoryProxy with synchedLocalTransactionAllowed=true be used to wrap the JMS connection factory?
That is how it is done in the best-jms-db pattern described here.
I ran the tests from the page above, and they work well. However, I modified them:
they use a DMLC with a reference to the JPA transaction manager, and there is no TransactionAwareConnectionFactoryProxy. That works even better!?!
Both JPA and JMS transactions are rolled back after a simulated JMS infrastructure failure.
How can that be possible??
Test code can be found here
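For reference, a sketch of the best-jms-db wiring under discussion (bean names and destination are assumptions; in the pattern the proxy matters mainly for code that sends via JmsTemplate inside the JPA transaction, while the container drives the unit of work with the JPA transaction manager and a transacted session):

```java
import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;

import org.springframework.context.annotation.Bean;
import org.springframework.jms.connection.TransactionAwareConnectionFactoryProxy;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.orm.jpa.JpaTransactionManager;

public class JmsJpaConfig {

    @Bean
    public TransactionAwareConnectionFactoryProxy connectionFactoryProxy(
            ConnectionFactory targetConnectionFactory) {
        TransactionAwareConnectionFactoryProxy proxy =
                new TransactionAwareConnectionFactoryProxy(targetConnectionFactory);
        // lets the JMS local transaction commit/roll back with the JPA one
        proxy.setSynchedLocalTransactionAllowed(true);
        return proxy;
    }

    @Bean
    public DefaultMessageListenerContainer listenerContainer(
            TransactionAwareConnectionFactoryProxy proxy,
            JpaTransactionManager jpaTransactionManager,
            MessageListener listener) {
        DefaultMessageListenerContainer dmlc = new DefaultMessageListenerContainer();
        dmlc.setConnectionFactory(proxy);
        // the JPA transaction drives receive + DB work; the synchronized
        // JMS local transaction commits only after the JPA commit
        dmlc.setTransactionManager(jpaTransactionManager);
        dmlc.setSessionTransacted(true);
        dmlc.setDestinationName("inbound.queue");
        dmlc.setMessageListener(listener);
        return dmlc;
    }
}
```

This is still only "best effort" synchronization, not XA: there remains a small window where the DB commit succeeds but the JMS acknowledgment fails, producing a duplicate delivery, so handlers should be idempotent.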
