Messages are not committed (lost) when using @TransactionalEventListener to send a message in a JPA transaction - Spring

Background of the code:
In order to replicate a production scenario, I have created a dummy app that basically saves something in the DB in a transaction and, in the same method, publishes an ApplicationEvent; the event listener then sends a message to RabbitMQ.
Classes and usages
The transaction starts from this method:
@Override
@Transactional
public EmpDTO createEmployeeInTrans(EmpDTO empDto) {
    return createEmployee(empDto);
}
This method saves the record in the DB and also publishes the event:
@Override
public EmpDTO createEmployee(EmpDTO empDTO) {
    EmpEntity empEntity = new EmpEntity();
    BeanUtils.copyProperties(empDTO, empEntity);
    System.out.println("<< In Transaction : " + TransactionSynchronizationManager.getCurrentTransactionName()
            + " >> Saving data for employee " + empDTO.getEmpCode());
    // Record data into the database
    empEntity = empRepository.save(empEntity);
    // Publish the event; the listener will send the message.
    eventPublisher.publishEvent(new ActivityEvent(empDTO));
    return createResponse(empDTO, empEntity);
}
This is the ActivityEvent:
import org.springframework.context.ApplicationEvent;
import com.kuldeep.rabbitMQProducer.dto.EmpDTO;

public class ActivityEvent extends ApplicationEvent {
    public ActivityEvent(EmpDTO source) {
        super(source);
    }
}
And this is the @TransactionalEventListener for the above event:
//@Transactional(propagation = Propagation.REQUIRES_NEW)
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void onActivitySave(ActivityEvent activityEvent) {
    System.out.println("Activity got event ... Sending message .. ");
    // the EmpDTO payload is carried as the event's source
    kRabbitTemplate.convertAndSend(exchange, routingkey, activityEvent.getSource());
}
The kRabbitTemplate is a bean configured like this:
@Bean
public RabbitTemplate kRabbitTemplate(ConnectionFactory connectionFactory) {
    final RabbitTemplate kRabbitTemplate = new RabbitTemplate(connectionFactory);
    kRabbitTemplate.setChannelTransacted(true);
    kRabbitTemplate.setMessageConverter(kJsonMessageConverter());
    return kRabbitTemplate;
}
Problem Definition
When I save a record and send a message to RabbitMQ using the above code flow, my messages are not delivered to the server; they are lost.
What I understand about transactions in AMQP is:
If the template is transacted but convertAndSend is not called from a Spring/JPA transaction, then messages are committed within the template's convertAndSend method.
// this is a snippet from org.springframework.amqp.rabbit.core.RabbitTemplate.doSend()
if (isChannelLocallyTransacted(channel)) {
    // Transacted channel created by this template -> commit.
    RabbitUtils.commitIfNecessary(channel);
}
But if the template is transacted and convertAndSend is called from within a Spring/JPA transaction, then isChannelLocallyTransacted in doSend evaluates to false, and the commit is deferred to the method that initiated the Spring/JPA transaction.
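To illustrate the two cases, here is a minimal sketch (not from the original post; it assumes the channel-transacted kRabbitTemplate bean shown above and hypothetical exchange/routing-key values):

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SendExamples {

    private final RabbitTemplate kRabbitTemplate; // channel-transacted, as configured above

    public SendExamples(RabbitTemplate kRabbitTemplate) {
        this.kRabbitTemplate = kRabbitTemplate;
    }

    // No Spring transaction active: the channel is "locally transacted",
    // so the template commits inside convertAndSend() itself.
    public void sendOutsideTransaction(Object payload) {
        kRabbitTemplate.convertAndSend("app.exchange", "app.key", payload);
    }

    // Spring transaction active: the channel is bound to the transaction and
    // the RabbitMQ commit is deferred until the transaction's synchronizations run.
    @Transactional
    public void sendInsideTransaction(Object payload) {
        kRabbitTemplate.convertAndSend("app.exchange", "app.key", payload);
    }
}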
What I found after investigating the reason for message loss in the above code:
A Spring transaction was active when I called the convertAndSend method, so the message was supposed to be committed within the Spring transaction.
For that, RabbitTemplate binds the resources and registers the synchronizations before sending the message, in bindResourceToTransaction of org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils:
public static RabbitResourceHolder bindResourceToTransaction(RabbitResourceHolder resourceHolder,
        ConnectionFactory connectionFactory, boolean synched) {

    if (TransactionSynchronizationManager.hasResource(connectionFactory)
            || !TransactionSynchronizationManager.isActualTransactionActive() || !synched) {
        return (RabbitResourceHolder) TransactionSynchronizationManager.getResource(connectionFactory); // NOSONAR never null
    }
    TransactionSynchronizationManager.bindResource(connectionFactory, resourceHolder);
    resourceHolder.setSynchronizedWithTransaction(true);
    if (TransactionSynchronizationManager.isSynchronizationActive()) {
        TransactionSynchronizationManager.registerSynchronization(new RabbitResourceSynchronization(resourceHolder,
                connectionFactory));
    }
    return resourceHolder;
}
In my code, after the resource bind, it is not able to registerSynchronization because TransactionSynchronizationManager.isSynchronizationActive() == false. And since it fails to register the synchronization, the Spring commit did not happen for the RabbitMQ message, because AbstractPlatformTransactionManager.triggerAfterCompletion calls RabbitMQ's commit for each registered synchronization.
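For reference, a small probe like the following (a hypothetical helper, not part of the original code) shows that state when called from inside the AFTER_COMMIT listener:

import org.springframework.transaction.support.TransactionSynchronizationManager;

public class TxStateProbe {

    // Prints the transaction state as seen from the AFTER_COMMIT listener:
    // the completed transaction's resources are still bound to the thread,
    // but its synchronizations have already been cleared.
    public static void logState() {
        System.out.println("actual tx active: "
                + TransactionSynchronizationManager.isActualTransactionActive());   // true
        System.out.println("synchronization active: "
                + TransactionSynchronizationManager.isSynchronizationActive());     // false
    }
}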
What problems I faced because of the above issue:
The message was not committed in the Spring transaction, so the message was lost.
As the resource was added in bindResourceToTransaction, it remained bound and did not allow a resource to be bound for any other message sent in the same thread.
Possible Root Cause of TransactionSynchronizationManager.isSynchronizationActive()==false
I found that the method which starts the transaction removed the synchronization in triggerAfterCompletion of the org.springframework.transaction.support.AbstractPlatformTransactionManager class, because status.isNewSynchronization() evaluated to true after the DB operation (this usually does not happen if I call convertAndSend without an ApplicationEvent):
private void triggerAfterCompletion(DefaultTransactionStatus status, int completionStatus) {
    if (status.isNewSynchronization()) {
        List<TransactionSynchronization> synchronizations = TransactionSynchronizationManager.getSynchronizations();
        TransactionSynchronizationManager.clearSynchronization();
        if (!status.hasTransaction() || status.isNewTransaction()) {
            if (status.isDebug()) {
                logger.trace("Triggering afterCompletion synchronization");
            }
            // No transaction or new transaction for the current scope ->
            // invoke the afterCompletion callbacks immediately
            invokeAfterCompletion(synchronizations, completionStatus);
        }
        else if (!synchronizations.isEmpty()) {
            // Existing transaction that we participate in, controlled outside
            // of the scope of this Spring transaction manager -> try to register
            // an afterCompletion callback with the existing (JTA) transaction.
            registerAfterCompletionWithExistingTransaction(status.getTransaction(), synchronizations);
        }
    }
}
What I did to overcome this issue
I simply added @Transactional(propagation = Propagation.REQUIRES_NEW) along with @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT) on the onActivitySave method, and it worked because a new transaction was started.
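In code, the workaround amounts to this (a sketch of the listener above, with the payload taken from the event since empDTO is not otherwise in scope):

@Transactional(propagation = Propagation.REQUIRES_NEW)
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void onActivitySave(ActivityEvent activityEvent) {
    // a new transaction is started here, so the RabbitMQ send gets its own
    // synchronization scope and is committed when this transaction commits
    kRabbitTemplate.convertAndSend(exchange, routingkey, activityEvent.getSource());
}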
What I need to know
Why does status.isNewSynchronization evaluate to true in the triggerAfterCompletion method when using an ApplicationEvent?
If the transaction was supposed to terminate in the parent method, why did I get TransactionSynchronizationManager.isActualTransactionActive() == true in the listener class?
If the actual transaction is active, was it supposed to remove the synchronization?
In bindResourceToTransaction, does Spring AMQP assume an active transaction without synchronization? If the answer is yes, why not initialize the synchronization if it is not active?
If I propagate a new transaction, then I lose the parent transaction; is there a better way to do this?
Please help me with this; it is a hot production issue, and I am not very sure about the fix I have done.

This is a bug; the RabbitMQ transaction code pre-dated the @TransactionalEventListener code by many years.
The problem is that, with this configuration, we are in a quasi-transactional state: while there is indeed a transaction in progress, the synchronizations have already been cleared because the transaction has already committed.
Using @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT) works.
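For example (a minimal sketch of the listener above with only the phase changed):

@TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
public void onActivitySave(ActivityEvent activityEvent) {
    // runs while the JPA transaction is still active and its synchronizations
    // are still registered, so the RabbitMQ commit piggybacks on that commit
    kRabbitTemplate.convertAndSend(exchange, routingkey, activityEvent.getSource());
}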
I see you already raised an issue:
https://github.com/spring-projects/spring-amqp/issues/1309
In future, it's best to ask questions here, or raise an issue if you feel there is a bug. Don't do both.

Related

Is 1 phase commit (ChainedTransactionManager) really necessary in this scenario vs no transaction management?

I have a Spring Boot application with a @JmsListener that receives a message from a queue, stores it in a database and sends it to another queue.
I wanted to have a minimal transactional guarantee, so a 1-phase commit works for me. After a lot of reading I found I could use the ChainedTransactionManager to coordinate the DataSource and JMS resources:
@Configuration
public class TransactionConfiguration {

    @Bean
    public ChainedTransactionManager transactionManager(JpaTransactionManager jpaTm, JmsTransactionManager jmsTm) {
        return new ChainedTransactionManager(jmsTm, jpaTm);
    }

    @Bean
    public JpaTransactionManager jpaTransactionManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }

    @Bean
    public JmsTransactionManager jmsTransactionManager(ConnectionFactory connectionFactory) {
        return new JmsTransactionManager(connectionFactory);
    }
}
Queue listener:
@Transactional(transactionManager = "transactionManager")
@JmsListener(...)
public void process(@Payload String message) {
    //Write to db
    //Send to output queue
}
START MESSAGING TX
START DB TX
READ MESSAGE
WRITE DB
SEND MESSAGE
COMMIT DB TX
COMMIT MESSAGING TX
If the DB commit fails, the message will be reprocessed again.
If the DB commit succeeds but the messaging commit fails, the message will be reprocessed. This is not a problem since I can guarantee the idempotency of the DB write operation.
Now my doubt is: suppose I hadn't configured the ChainedTransactionManager and the listener were like this (no @Transactional):
@JmsListener(...)
public void process(@Payload String message) {
    //Write to db
    //Send to output queue
}
Doesn't this behave the same as the other example despite not coordinating the commits? (I've verified that on SQL exceptions the message is redelivered)
RECEIVE MESSAGE
WRITE DB + COMMIT
SEND MESSAGE + COMMIT
If the DB commit failed the message would be reprocessed
If it succeeded and the send message operation failed it would be reprocessed again.
So is the ChainedTransactionManager really necessary in this case?
UPDATE: Debugging the Spring Boot autoconfiguration (JmsAnnotationDrivenConfiguration)...
@Bean
@ConditionalOnMissingBean(name = "jmsListenerContainerFactory")
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(
        DefaultJmsListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    return factory;
}
... the DefaultJmsListenerContainerFactoryConfigurer is configuring the factory with factory.setSessionTransacted(true); because there is no JtaTransactionManager defined:
if (this.transactionManager != null) {
    factory.setTransactionManager(this.transactionManager);
}
else {
    factory.setSessionTransacted(true);
}
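For comparison, declaring the factory yourself and setting the flag explicitly would look roughly like this (a sketch assuming the same auto-configured configurer and connection factory beans):

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(
        DefaultJmsListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setSessionTransacted(true); // local JMS transaction per listener invocation
    return factory;
}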
With setSessionTransacted(true), according to the Spring doc I would get the message rollback and redelivery behaviour I need on DB (or any) exceptions:
Local resource transactions can simply be activated through the
sessionTransacted flag on the listener container definition. Each
message listener invocation will then operate within an active JMS
transaction, with message reception rolled back in case of listener
execution failure. Sending a response message (via
SessionAwareMessageListener) will be part of the same local
transaction, but any other resource operations (such as database
access) will operate independently. This usually requires duplicate
message detection in the listener implementation, covering the case
where database processing has committed but message processing failed
to commit.
That explains why I'm getting the behaviour I expect without needing to configure the ChainedTransactionManager.
After all this, could you tell me if it makes sense (i.e. whether it adds some guarantee I'm missing) to use the ChainedTransactionManager in this case?

How to set a Message Handler programmatically in Spring Cloud AWS SQS?

Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        //... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the docs
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss how to dynamically "connect" my provided "MessageHandler" (from my API) to the incoming message for the specified queue name.
In the config of the example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ and thought that it would be similar here with AWS SQS.
Does anyone have an idea, how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am trying to go an unusual way: I set up an AWS client bean at build time and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation, with which I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
I did the example that iterates over queues defined in properties and then consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void pool() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);
                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());
                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify, in my case, I need multiple queues, one for each tenant, with the queue URL for each one passed in a property file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal which has the queues you have created in runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS SQS JMS documentation).
When we do Spring and SQS we use the spring-cloud-starter-aws-messaging.
Then just create a Listener class
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        //process the message
    }
}

Stomp over websocket using Spring and sockJS message lost

On the client side javascript I have
stomp.subscribe("/topic/path", function (message) {
    console.info("message received");
});
And on the server side
public class Controller {

    private final MessageSendingOperations<String> messagingTemplate;

    @Autowired
    public Controller(MessageSendingOperations<String> messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    @SubscribeMapping("/topic/path")
    public void subscribe() {
        LOGGER.info("before send");
        messagingTemplate.convertAndSend("/topic/path", "msg");
    }
}
From this setup, I am occasionally (around once in 30 page refreshes) experiencing message dropping, which means I can see neither the "message received" log on the client side nor the websocket traffic in the Chrome debugging tool.
"before send" is always logged on the server side.
It looks like the MessageSendingOperations is not ready when I call it in the subscribe() method. (If I put Thread.sleep(50); before calling messagingTemplate.convertAndSend, the problem disappears, or at least is much less likely to be reproduced.)
I wonder if anyone has experienced this before and if there is an event that can tell me whether MessageSendingOperations is ready or not.
The issue you are facing lies in the nature of the clientInboundChannel, which is an ExecutorSubscribableChannel by default.
It has 3 subscribers:
0 = {SimpleBrokerMessageHandler#5276} "SimpleBroker[DefaultSubscriptionRegistry[cache[0 destination(s)], registry[0 sessions]]]"
1 = {UserDestinationMessageHandler#5277} "UserDestinationMessageHandler[DefaultUserDestinationResolver[prefix=/user/]]"
2 = {SimpAnnotationMethodMessageHandler#5278} "SimpAnnotationMethodMessageHandler[prefixes=[/app/]]"
which are invoked within the taskExecutor, hence asynchronously.
The first one here (SimpleBrokerMessageHandler, or StompBrokerRelayMessageHandler if you use a broker relay) is responsible for registering the subscription for the topic.
Your messagingTemplate.convertAndSend("/topic/path", "msg") operation may be performed before the subscription registration for that WebSocket session, because they run in separate threads. Hence the broker handler does not know it should send the message to that session.
The @SubscribeMapping can be configured on a method with a return value, where the result of the method is sent as a reply to that subscription on the client.
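For instance (a minimal sketch of that approach for the controller above):

@SubscribeMapping("/topic/path")
public String subscribe() {
    // the return value is sent back to the subscribing client once the
    // SUBSCRIBE frame has been processed, avoiding the race described above
    return "msg";
}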
HTH
Here is my solution. It is along the same lines. I added an ExecutorChannelInterceptor and published a custom SessionSubscribedEvent. The key is to publish the event after the message has been handled by the AbstractBrokerMessageHandler, which means the subscription has been registered with the broker.
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
    registration.interceptors(new ExecutorChannelInterceptorAdapter() {
        @Override
        public void afterMessageHandled(Message<?> message, MessageChannel channel, MessageHandler handler, Exception ex) {
            SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.wrap(message);
            if (accessor.getMessageType() == SimpMessageType.SUBSCRIBE && handler instanceof AbstractBrokerMessageHandler) {
                /*
                 * Publish a new session subscribed event AFTER the client
                 * has been subscribed to the broker. Before, Spring was
                 * publishing the event after receiving the message but not
                 * necessarily after the subscription occurred. There was a
                 * race condition because the subscription was being done on
                 * a separate thread.
                 */
                applicationEventPublisher.publishEvent(new SessionSubscribedEvent(this, message));
            }
        }
    });
}
A little late, but I thought I'd add my solution. I was having the same problem with the subscription not being registered before I was sending data through the messaging template. This issue happened rarely and unpredictably because of the race with the DefaultSubscriptionRegistry.
Unfortunately, I could not just use the return value of the @SubscribeMapping method because we were using a custom object mapper that changed dynamically based on the type of user (attribute filtering, essentially).
I searched through the Spring code and found that SubscriptionMethodReturnValueHandler was responsible for sending the return value of subscription mappings and had a different messagingTemplate than the autowired SimpMessagingTemplate of my async controller!!
So the solution was autowiring MessageChannel clientOutboundChannel into my async controller and using that to create a SimpMessagingTemplate. (You can't directly wire it in because you'll just get the template going to the broker.)
In subscription methods, I then used the direct template while in other methods I used the template that went to the broker.
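A sketch of that wiring (the "clientOutboundChannel" bean name is assumed from the standard WebSocket broker configuration):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Controller;

@Controller
public class AsyncController {

    private final SimpMessagingTemplate directTemplate;  // bypasses the broker, goes straight to the client
    private final SimpMessagingTemplate brokerTemplate;  // goes through the broker as usual

    @Autowired
    public AsyncController(@Qualifier("clientOutboundChannel") MessageChannel clientOutboundChannel,
                           SimpMessagingTemplate brokerMessagingTemplate) {
        this.directTemplate = new SimpMessagingTemplate(clientOutboundChannel);
        this.brokerTemplate = brokerMessagingTemplate;
    }
}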

How can I use Spring Integration to only send a message if my transaction finishes successfully?

I am in the process of learning Spring Integration and using it to implement a basic email service in Grails. What I want to be able to do is call my email service but only have the email be sent if the transaction trying to send the email is successful. Although this is being done in Grails, it really shouldn't be different from a regular Spring app except for using the BeanBuilder DSL instead of the XML configuration.
Anyway, here is my configuration for the channel:
beans = {
xmlns integration:'http://www.springframework.org/schema/integration'
integration.channel(id: 'email')
}
Here is my service:
class MailService {
    @ServiceActivator(inputChannel = "email")
    MailMessage sendMail(Closure callable) {
        //sending mail code
    }
}
Now what I expect to happen is that when I inject this MailService into another service and call sendMail, that will place a message on the email channel, which will only get published if my transaction completes. What leads me to believe this is the section on UserProcess here: http://docs.spring.io/spring-integration/reference/html/transactions.html, which states that a user-started process will have all the transactional properties that Spring provides.
I am attempting to test this with an integration test:
void "test transactionality"() {
when:
assert DomainObject.all.size() == 0
DomainObject.withNewTransaction { status ->
DomainObject object = buildAndSaveNewObject()
objectNotificationService.sendEmails(object) //This service injects emailService and calls sendMail
throw new Exception()
}
then:
thrown(Exception) // This is true
DomainObject.all.size() == 0 // This is true
greenMail.receivedMessages.length == 0 // This fails
}
What this does is create and save an object and send emails all within the same transaction. I then throw an exception to cause that transaction to fail. As expected, none of my domain objects are persisted. However, I still receive emails.
I am quite new to Spring Integration and Spring in general, so it's possible I'm misunderstanding how this is all supposed to work, but I would expect the sendMail message to never be placed on the email channel.
It turns out that I don't think Spring Integration is the best way to achieve "only perform on commit" functionality (but if you do use it, Gary Russell's answer is the way to go). You should instead use the TransactionSynchronizationManager provided as part of the Spring transaction management framework.
As an example, I created a TransactionService in grails:
class TransactionService {

    def executeAfterCommit(Closure c) {
        TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
            @Override
            void afterCommit() {
                c.call()
            }
        })
    }
}
You can then inject this anywhere you need it and use it like so:
def transactionService

transactionService.executeAfterCommit {
    sendConfirmationEmail()
}
I don't know how this would be done in Grails, but in Java, you could use a transaction synchronization factory whereby you can take different actions depending on success/failure...
<int:transaction-synchronization-factory id="syncFactory">
    <int:after-commit expression="payload.renameTo('/success/' + payload.name)" channel="committedChannel" />
    <int:after-rollback expression="payload.renameTo('/failed/' + payload.name)" channel="rolledBackChannel" />
</int:transaction-synchronization-factory>
The result of the expression evaluation is sent to the channel, where you can have your outbound mail adapter.

Controlling inner transaction settings from outer transaction with Spring 2.5

I'm using Spring 2.5 transaction management and I have the following set-up:
Bean1
@Transactional(noRollbackFor = { Exception.class })
public void execute() {
    try {
        bean2.execute();
    } catch (Exception e) {
        // persist failure in database (so the transaction shouldn't fail)
        // the exception is not re-thrown
    }
}
Bean2
@Transactional
public void execute() {
    // do something which throws a RuntimeException
}
The failure is never persisted into the DB from Bean1 because the whole transaction is rolled back.
I don't want to add noRollbackFor in Bean2 because it's used in a lot of places which don't have logic to handle runtime exceptions properly.
Is there a way to avoid having my transaction rolled back only when Bean2.execute() is called from Bean1?
Otherwise, I guess my best option is to persist my failure within a new transaction? Anything else clean I can do?
This is one of the caveats of annotations... your class is not reusable!
If you'd configured your transactions in XML, it would have been possible.
Assuming you use XML configuration: if it's not consuming expensive resources, you can create another instance of bean2 for use by the code you specified. That is, you can configure one bean as you specified above, and one with no rollback for exceptions.
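Alternatively, the "persist the failure within a new transaction" option mentioned in the question would look roughly like this (a sketch; the failure bean and DAO names are assumptions), keeping in mind that REQUIRES_NEW only applies when the method is called through a separate Spring-managed bean, not as a local call within Bean1:

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void persistFailure(Exception e) {
    // runs in its own transaction, so it commits even if the caller's
    // transaction is rolled back afterwards
    failureDao.save(new FailureRecord(e.getMessage()));
}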
