How to handle a transaction involving Spring messaging/JMS and a database - spring

I have a method that gets an invoice, creates XML from it, sends that XML to a JMS queue, and then saves the invoice to the database with an updated status of 'invoiced'. Below is pseudo code that involves Spring and Hibernate. My questions are: does a failure in the Hibernate save roll back the JMS send? And if the JMS send fails, how can I roll back saving the invoice status? Does this come under distributed transaction management? What are the transactional cases involved here? Thanks.
@Transactional(propagation = Propagation.REQUIRED)
public void processInvoices(Invoice invoice) {
    // Marshal the invoice to XML and publish it to the queue
    String xml = createXML(invoice);
    messageService.sendInvoice(xml);
    // Then persist the updated status as part of the same unit of work
    invoice.setStatus("invoiced");
    save(invoice);
}

As far as I know, and from what I understand of your question, you want to synchronize the Hibernate and JMS transactions. To do this you will need JTA to manage the transactions across both Hibernate and JMS.
Read more: Spring synchronising Hibernate and JMS transactions
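For illustration, here is a minimal sketch of what that could look like, assuming an XA-capable DataSource and JMS ConnectionFactory that are already enlisted with a JTA provider (Atomikos, Bitronix or an application server). Invoice, InvoiceRepository and the queue name are placeholders, not your actual types.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
@EnableTransactionManagement
public class JtaConfig {

    @Bean
    public PlatformTransactionManager transactionManager() {
        // Delegates to the JTA UserTransaction/TransactionManager of the container
        // or standalone JTA provider; both the XA DataSource and the XA
        // ConnectionFactory enlist in that transaction.
        return new JtaTransactionManager();
    }
}

@Service
class InvoiceService {

    private final JmsTemplate jmsTemplate;              // backed by an XA ConnectionFactory
    private final InvoiceRepository invoiceRepository;  // backed by an XA DataSource

    InvoiceService(JmsTemplate jmsTemplate, InvoiceRepository invoiceRepository) {
        this.jmsTemplate = jmsTemplate;
        this.invoiceRepository = invoiceRepository;
    }

    @Transactional
    public void processInvoice(Invoice invoice) {
        // Both operations join the same JTA transaction: if the save fails the
        // JMS send is rolled back, and if the send fails the save is rolled back.
        jmsTemplate.convertAndSend("invoice.queue", createXml(invoice));
        invoice.setStatus("invoiced");
        invoiceRepository.save(invoice);
    }

    private String createXml(Invoice invoice) {
        return "<invoice/>"; // marshalling omitted
    }
}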

Related

What precisely does setSessionTransacted mean in JmsTemplate?

Please tell me whether I have understood the Spring documentation correctly.
The Spring docs state: https://docs.spring.io/spring/docs/current/spring-framework-reference/integration.html#jms-tx
(...)When you use the JmsTemplate in an unmanaged environment, you can specify these values (transaction and acknowledgment modes) through the use of the properties sessionTransacted and sessionAcknowledgeMode.
When you use a PlatformTransactionManager with JmsTemplate, the template is always given a transactional JMS Session.(..)
(BTW, that is true - the session is transactional.)
The Javadoc states: https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/jms/core/JmsTemplate.html
Default settings for JMS Sessions are "not transacted" and "auto-acknowledge". As defined by the Java EE specification, the transaction and acknowledgement parameters are ignored when a JMS Session is created inside an active transaction, no matter if a JTA transaction or a Spring-managed transaction.
I understood this to mean that if a transaction is active, the JmsTemplate session transaction settings are ignored - that is true - and that the session should participate in the active transaction - that is not true.
I debugged why it is not true and found: https://github.com/spring-projects/spring-framework/blame/master/spring-jms/src/main/java/org/springframework/jms/connection/ConnectionFactoryUtils.java#L353
if (resourceHolderToUse != resourceHolder) {
    TransactionSynchronizationManager.registerSynchronization(
            new JmsResourceSynchronization(resourceHolderToUse, connectionFactory,
                    resourceFactory.isSynchedLocalTransactionAllowed()));
    resourceHolderToUse.setSynchronizedWithTransaction(true);
    TransactionSynchronizationManager.bindResource(connectionFactory, resourceHolderToUse);
}
The line resourceHolderToUse.setSynchronizedWithTransaction(true) is in line with the documentation.
The issue here is resourceFactory.isSynchedLocalTransactionAllowed(),
because resourceFactory is org.springframework.jms.core.JmsTemplate.JmsTemplateResourceFactory#isSynchedLocalTransactionAllowed, which delegates to JmsTemplate#sessionTransacted.
Conclusion:
According to the documentation, if a transaction is active, JmsTemplate#sessionTransacted should be ignored. But that is not true: although the session is transactional, it cannot participate in the commit.
JmsTemplate#sessionTransacted is eventually mapped to ConnectionFactoryUtils.JmsResourceSynchronization#transacted, and its default of false prevents commit from being called at the end of the transaction (JmsResourceSynchronization "thinks" it does not participate in the transaction).
Do I understand the documentation correctly, and is there really a bug here?
Guided by M. Deinum, I ran more experiments, and it seems I misunderstood the term "Spring-managed transaction".
I simply thought that a Spring-managed transaction is any transaction started by a PlatformTransactionManager. But:
If the PlatformTransactionManager is a JtaTransactionManager and a transaction is started, it IS a Spring-managed transaction; the JmsTemplate attribute sessionTransacted is ignored and the JmsTemplate is part of the transaction.
If the PlatformTransactionManager is a DataSourceTransactionManager or JpaTransactionManager, then:
if sessionTransacted is false, the JmsTemplate is not in the transaction;
if sessionTransacted is true, the JmsTemplate is synchronized with the transaction: after the commit/rollback of the JDBC/JPA transaction, the corresponding commit/rollback is called on the JMS transaction (see the sketch below).
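A small configuration sketch of that last case, assuming a plain DataSourceTransactionManager: only with setSessionTransacted(true) does the JMS send follow the JDBC commit/rollback.

import javax.jms.ConnectionFactory;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class JmsSyncConfig {

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        JmsTemplate template = new JmsTemplate(connectionFactory);
        // Enables "synchronized local transactions": the JMS session is committed
        // after the JDBC transaction commits and rolled back on rollback.
        template.setSessionTransacted(true);
        return template;
    }

    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        // Local JDBC transactions only; the JMS side piggybacks via the flag above.
        return new DataSourceTransactionManager(dataSource);
    }
}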

The Hibernate session (EntityManager) scope in Spring Batch?

As I'm new to Spring and Spring Batch, I have a general question about Spring Batch and JPA, using Hibernate as the provider.
Please, I want to know when the Hibernate session (wrapped by the EntityManager) is flushed. Is it between Reader, Processor and Writer, or at each commit interval? Can we control it or not?
Please, I want to know when the Hibernate session (wrapped by the EntityManager) is flushed. Is it between Reader, Processor and Writer, or at each commit interval?
The session is flushed after writing a chunk of items, at each commit interval. For more details, take a look at:
HibernateItemWriter: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/database/HibernateItemWriter.java#L95
JpaItemWriter: https://github.com/spring-projects/spring-batch/blob/master/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/database/JpaItemWriter.java#L84
Can we control it or not?
If you use the HibernateItemWriter, you can set the clearSession flag to clear the session after each chunk.
To the best of my knowledge, the session is flushed when the Spring transaction is committed, which is after each chunk.
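For example, a writer definition along these lines; Invoice and the SessionFactory bean are assumptions for illustration, not part of your project.

import org.hibernate.SessionFactory;
import org.springframework.batch.item.database.HibernateItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WriterConfig {

    @Bean
    public HibernateItemWriter<Invoice> invoiceWriter(SessionFactory sessionFactory) {
        HibernateItemWriter<Invoice> writer = new HibernateItemWriter<>();
        writer.setSessionFactory(sessionFactory);
        // Clears the Hibernate session after each chunk has been flushed, so entities
        // from previous chunks do not pile up in the first-level cache.
        writer.setClearSession(true);
        return writer;
    }
}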

How to turn off JPA for Spring Batch under Spring Boot

We have a Spring Boot application that uses Spring Integration and Spring Batch. We drop a file in the poller and it processes it. This process inserts records into a database, then reads them back out, does some processing, and writes a file. Let's say there are 10 records. The first time, we get 10 records read and 10 written. Without stopping the server, we delete all the records through a SQL client on the database and run the same file again, and we get 10 records read but 20 written. I believe there is some JPA or caching going on with the datasource. We've tried turning off several auto-configuration options for JPA and caching, but we haven't found the right configuration option to turn off the caching.
Adding a bit more detail to the question.
Basically we have a cron scheduler that has a FileHandler. In the handleFile method we have the following.
public File handleFile(File file) throws Throwable {
    JobParametersBuilder jobParametersBuilder = new JobParametersBuilder();
    Job job = (Job) appContext.getBean("processInitialFileJob");
    JobExecution jb = jobLauncher.run(job, jobParametersBuilder.toJobParameters());
    ....
}
What can we do to the code above to ensure that it has a new JPA session or not use the JPA session at all? This job needs to read from the database each time and not a cached representation of the database.
Are you using Hibernate? The Hibernate first-level cache may be causing the problem for you. Hibernate manages a first-level cache which is local to your Session, so once you create a session and do any transactions in it, Hibernate keeps those in sync internally. But when you make changes to the table outside Hibernate, Hibernate won't see them until flush is called on the session and the session is closed.
To make sure this is not happening, inside your poller logic try creating a new Session (or EntityManager in the case of JPA) and closing it for every read/process/write cycle (a sketch follows below).
Also make sure hibernate.current_session_context_class is not set to thread, since the poller may reuse a thread and the same Hibernate Session could be injected again.
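A minimal sketch of that idea, assuming an injected EntityManagerFactory; Invoice, the query and process(...) are placeholders for your own types and logic.

// Create a fresh persistence context for this read/process/write cycle.
EntityManager em = entityManagerFactory.createEntityManager();
try {
    // A newly created EntityManager has an empty first-level cache,
    // so this query reflects the current state of the database.
    List<Invoice> invoices = em
            .createQuery("select i from Invoice i", Invoice.class)
            .getResultList();
    process(invoices);
} finally {
    // Discard the persistence context so nothing is cached for the next run.
    em.close();
}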
This ended up not being an issue with Hibernate or JPA, but an issue of a StringBuilder holding on to data from previous runs. I believe this will need to be set up as @JobScope so that it is not reused across different executions of the job.
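For example, a hypothetical job-scoped holder could look like this; ReportWriter is a made-up name, and the StringBuilder itself cannot be made job-scoped directly because it is a final class and cannot be proxied.

import org.springframework.batch.core.configuration.annotation.JobScope;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ReportConfig {

    // A new ReportWriter (and therefore a new, empty StringBuilder) is created
    // for every job execution instead of being reused across runs.
    @Bean
    @JobScope
    public ReportWriter reportWriter() {
        return new ReportWriter();
    }

    public static class ReportWriter {
        private final StringBuilder content = new StringBuilder();

        public void append(String line) {
            content.append(line).append(System.lineSeparator());
        }

        public String getContent() {
            return content.toString();
        }
    }
}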

Spring: If I fork a new thread, will it be included in the transaction by Spring?

We are using Spring's declarative transaction attribute for database integrity. Some of our code calls a web service which does a bunch of stuff in SharePoint. The problem is that when the web service takes a long time, users get a deadlock from Spring, which is holding up the backend.
If I start a new thread inside a function that has the declarative Spring transaction attribute, will that thread be ignored by Spring?
[Transaction(TransactionPropagation.Required, ReadOnly = false)]
public void UploadPDFManual(/*parameters*/)
{
    // Do some database-related things
    if (revisionPDFBytes != null)
    {
        // My SharePoint call which calls the web service;
        // I start a new thread from the ASP.NET worker thread pool.
        Task.Factory.StartNew(() => DocumentRepositoryUtil.CreateSharepointDocument(docInfo));
    }
}
Any other options I should go for?
You don't need to do it in a transaction. A transaction makes sure the database saves an object properly; that's it. All other stuff must be done after the transaction commits. In Java, you can do that with Spring's transaction synchronization or with JMS; take a look at the accepted answer over here, and the sketch below.
More useful info specific to .NET (see 17.8).
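In Java terms, a sketch of the "after commit" approach with Spring's transaction synchronization; DocumentInfo, documentRepositoryUtil and the repository call are placeholders. On recent Spring versions TransactionSynchronization has default methods so only afterCommit needs overriding; on older versions use TransactionSynchronizationAdapter instead.

import java.util.concurrent.CompletableFuture;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Transactional
public void uploadPdfManual(DocumentInfo docInfo) {
    // ... database work, covered by the transaction ...

    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            // Runs only once the database transaction has committed successfully;
            // the slow SharePoint/web-service call happens on a separate thread,
            // so no database locks are held while it executes.
            CompletableFuture.runAsync(
                    () -> documentRepositoryUtil.createSharepointDocument(docInfo));
        }
    });
}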

XA transactions and message bus

In our new project we would like to achieve transactions that involve JPA (MySQL) and a message bus (RabbitMQ).
We started building our infrastructure with Spring Data using MySQL and RabbitMQ (via the Spring AMQP module). Since RabbitMQ is not XA-transactional, we configured the Neo4j ChainedTransactionManager as our main transaction manager. This manager takes the JPA txManager and the RabbitTransactionManager as arguments.
Now, I do get the ability to annotate a service with @Transactional and use both JPA and Rabbit inside it. If I throw an exception within the service, then none of the actions actually occur.
Here are my questions:
Does this configuration really give me an atomic transaction?
I've heard that the chained transaction manager does not use a two-phase commit but a "best effort". Is this best effort less reliable? If so, how?
What the ChainedTransactionManager does is basically start transactions in the configured order and commit them in reverse order. So say you have a JpaTransactionManager and a RabbitTransactionManager and configure it like so:
@Bean
public PlatformTransactionManager transactionManager() {
    return new ChainedTransactionManager(rabbitTransactionManager(), jpaTransactionManager());
}
Now if the JPA commit succeeds but your commit to RabbitMQ fails, your database changes will still be persisted, as they are already committed.
So to answer your first question: it doesn't give you a real atomic transaction; everything that was committed before the exception occurred (during commit) remains committed.
See http://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/transaction/ChainedTransactionManager.html
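For completeness, a usage sketch on top of the configuration above; OrderRepository, the RabbitTemplate wiring and the exchange/routing key are assumptions for illustration, and the RabbitTemplate is assumed to have channelTransacted set to true so it joins the Rabbit transaction.

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderRepository orderRepository;
    private final RabbitTemplate rabbitTemplate;

    public OrderService(OrderRepository orderRepository, RabbitTemplate rabbitTemplate) {
        this.orderRepository = orderRepository;
        this.rabbitTemplate = rabbitTemplate;
    }

    @Transactional // runs under the chained transaction manager defined above
    public void saveAndPublish(Order order) {
        // An exception here rolls back both resources. On commit, JPA commits first
        // and Rabbit second, so a failing Rabbit commit can still leave the database
        // change in place; that is the "best effort" part.
        orderRepository.save(order);
        rabbitTemplate.convertAndSend("orders.exchange", "order.created", order);
    }
}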
