How to roll back a transaction invoked with JPA entity listeners - Spring

I'm using JPA, Spring Data, and entity listeners to audit my entities, specifically on @PostUpdate, @PostPersist, and @PostRemove.
This is pseudo code of my entity listener class:
public class EntityListener extends AuditingEntityListener {

    @PostUpdate
    public void postUpdate(Object auditedEntity) {
        writer.saveEntity(auditedEntity, "UPDATE");
    }
}
This is pseudo code of the Writer class:
public class Writer {

    @Async
    public void saveEntity(Object auditedEntity, String action) {
        try {
            // some code to prepare the history entity
            historyDAO.save(entity);
        } catch (Exception e) {
            // swallowed: the caller never sees the failure
        }
    }
}
When an exception is thrown in the Writer class, the auditedEntity is still updated or inserted, but the history entity where I store the audit action is not.
The problem is that I need to invoke the saveEntity method on another thread for performance reasons (@Async), but in that case a new transaction is opened instead of joining the one that was already open.
How can I solve the rollback issue for both transactions, so that when an exception is thrown neither the historyEntity nor the auditedEntity is persisted?

I understand that you want to rollback both the child and the parent transaction when an exception is thrown from within Writer.saveEntity.
The problem is that the thread with the original transaction would still need to wait for all these complicated operations to finish before it could mark the transaction as committed. You can't easily span a transaction across multiple threads, either.
The only thing you could probably do to speed things up is you could run the logic of generating the history entities in parallel, and then save them all just before the transaction commits.
One way I can think of to do that is using a Hibernate interceptor:
public class AuditInterceptor extends EmptyInterceptor {

    private List<Callable<BaseEntity>> historyEntries;
    private ExecutorService executor;
    ...

    @Override
    public void beforeTransactionCompletion(Transaction tx) {
        List<Future<BaseEntity>> futures = executor.invokeAll(historyEntries);
        if (executor.awaitTermination(/* some timeout here */)) {
            futures.stream().map(Future::get).forEach(entity -> session.save(entity));
        } else {
            /* rollback */
        }
    }
}
Your listener code then becomes:
@PostUpdate
public void postPersist(Object auditedEntity) {
    interceptor.getHistoryEntries().add(new Callable<BaseEntity>() {
        /* history entry generation logic goes here */
    });
}
(Note that the above code is greatly simplified. You could use any other asynchronous execution API; the basic idea is that you need to block in AuditInterceptor.beforeTransactionCompletion, waiting for all the history entries to be generated.)
However, I would strongly advise against using the above technique, as it is rather complicated and error prone.
If you look here: https://docs.jboss.org/hibernate/orm/5.1/userguide/html_single/chapters/events/Events.html, you'll find that Hibernate interceptors have more interesting methods that could help you gather auditing info, and that perhaps your implementation could make use of them, possibly avoiding the need for complicated logic altogether (Hibernate already does track changes to fields of individual entities, so you get that information for free).
Why reinvent the wheel, though? If you dig even deeper, you'll find the Hibernate Envers module (http://hibernate.org/orm/envers/, works for both JPA and pure Hibernate) which gives you business auditing out of the box. Envers already digs into the above mechanism, so hopefully the performance issue would go away.
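For a feel of what Envers involves, here is a minimal sketch under assumptions (the Payment entity and its fields are made up; only the hibernate-envers dependency and the @Audited annotation are actually required):
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.envers.Audited;

@Entity
@Audited // Envers records each insert/update/delete in a Payment_AUD table, within the same transaction
public class Payment {

    @Id
    private Long id;
    private String status;

    // getters and setters omitted
}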
Final note: have you measured how long history entry generation takes? I would guess that executing for loops and if statements might be cheaper than database access operations. If I were you, I wouldn't do any of the above unless I was absolutely sure that's where the performance bottleneck was.

Related

Repository is not saving data in the onError method, while it saves in the onWrite method of a Listener

I have a simple listener with three methods, and a repository autowired into it. Saving an object from afterWrite works nicely, but when saving an item from the onError method no exception occurs, yet it is not saving any data. Thankful for suggestions.
public class WriteListener implements ItemWriteListener<Object> {

    @Autowired
    private TestRepository testRepository;

    public void beforeWrite(List<? extends Object> items) {
        System.out.println("Going to write following items: " + items.toString());
    }

    public void onWriteError(Exception exception, List<? extends Object> items) {
        System.out.println("Error occurred when writing items!");
        testRepository.save(items.get(0)); // not working
    }

    public void afterWrite(List<? extends Object> items) {
        testRepository.save(items.get(0)); // works and saves the data
    }
}
Based on the limited information provided, the most likely cause is the exception itself. The exception will have marked the current transaction as rollback-only, so Spring rolls it back, including your save.
If you still want to store data in your listener despite the exception, do it in a separate transaction context. The simplest way is to put @Async on your listener method and mark it @Transactional explicitly, to ensure it initiates a new transaction. Check out Spring Events, which covers this topic in a little more depth.
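A minimal sketch of that suggestion, assuming a hypothetical AuditSaver bean wired with the same TestRepository (and @EnableAsync present on a configuration class):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Component
public class AuditSaver {

    @Autowired
    private TestRepository testRepository;

    // Runs on another thread, outside the step's rollback-only transaction;
    // REQUIRES_NEW additionally guarantees a fresh transaction of its own.
    @Async
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void saveOnError(Object item) {
        testRepository.save(item);
    }
}
The listener's onWriteError would then delegate to auditSaver.saveOnError(items.get(0)) instead of calling the repository directly.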

Deleting a record then selecting within the same Spring Transaction still returns the deleted record

I have some code within a Spring transaction with the isolation level set to SERIALIZABLE. This code does a few things: first it deletes all records from a table that have a flag set, next it performs a select to ensure invalid records cannot be written, and finally the new records are written.
The problem is that the select continues to return the records that were deleted when the code is run with the transaction annotation. My understanding is that because we are performing these operations within the same Spring transaction, the previous delete operation should be taken into account by the select.
We are using Spring Boot 2.1 and Hibernate 5.2
A summary of the code is shown below:
@HystrixCommand
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
    // Below line added as part of debugging, but I don't think I should really need it?
    fooRepository.flush();
}

public List<EntityObj> findRecordByProperty(final String property) {
    return fooRepository.findEntityObjByProperty(property);
}

@Transactional(isolation = Isolation.SERIALIZABLE)
public void debugReadWrite() {
    EntityObj entityObj = new EntityObj();
    entityObj.setId(1);
    deleteRecord(entityObj);
    List<EntityObj> results = findRecordByProperty("bar");
    if (!results.isEmpty()) {
        throw new RuntimeException("Should be no results!");
    }
}
The transaction has not been committed yet; you need to complete the transaction and then find the record.
Decorating deleteRecord with propagation = Propagation.REQUIRES_NEW should solve the issue:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
    // flush not needed: fooRepository.flush();
}
A flush is not needed, because when deleteRecord completes the transaction will be committed.
Under the hood:
// start transaction
public void deleteRecord(EntityObj entityObj) {
    fooRepository.deleteById(entityObj.getId());
}
// commit transaction
Turns out the issue was due to our use of Hystrix. The transaction is started outside of Hystrix, and at a later point the code goes through a Hystrix command. The Hystrix command uses a thread pool, so the transaction context is lost while executing on a new thread from the Hystrix pool. See this GitHub issue for more info:
https://github.com/spring-cloud/spring-cloud-netflix/issues/1381
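One workaround to consider, sketched here as an assumption rather than a confirmed fix: run the command with semaphore isolation so it executes on the calling thread and keeps that thread's transaction context, at the cost of Hystrix's thread-based timeout handling.
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;

public class FooService {

    private final FooRepository fooRepository; // the question's repository

    public FooService(FooRepository fooRepository) {
        this.fooRepository = fooRepository;
    }

    // SEMAPHORE isolation keeps execution on the caller's thread, so the
    // Spring transaction bound to that thread stays visible to the repository.
    @HystrixCommand(commandProperties = {
            @HystrixProperty(name = "execution.isolation.strategy", value = "SEMAPHORE")
    })
    public void deleteRecord(EntityObj entityObj) {
        fooRepository.deleteById(entityObj.getId());
    }
}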

Spring Data Solr @Transactional Commits

I currently have a setup where data is inserted into a database as well as indexed into Solr. These two steps are wrapped in a Spring-managed transaction via the @Transactional annotation. What I've noticed is that spring-data-solr issues an update with the following parameters whenever the transaction is closed: params{commit=true&softCommit=false&waitSearcher=true}
@Transactional
public void save(Object toSave) {
    dbRepository.save(toSave);
    solrRepository.save(toSave);
}
The rate of commits into Solr is fairly high, so ideally I'd like to send data to the Solr index and have Solr auto-commit at regular intervals. I have autoCommit (and autoSoftCommit) set in my solrconfig.xml, but since spring-data-solr sends those commit parameters, it does a hard commit every time.
I'm aware that I can drop down to the SolrTemplate API and issue commits manually, but I would like to keep the solrRepository.save call within a Spring-managed transaction if possible. Is there a way to modify the parameters that are sent to Solr on commit?
After putting in an IDE debug breakpoint in org.springframework.data.solr.repository.support.SimpleSolrRepository here:
private void commitIfTransactionSynchronisationIsInactive() {
    if (!TransactionSynchronizationManager.isSynchronizationActive()) {
        this.solrOperations.commit(solrCollectionName);
    }
}
I discovered that wrapping my code in @Transactional (plus the other details needed to actually let the framework begin/end the code as a transaction) doesn't achieve what we expect with Spring Data for Apache Solr. The stack trace shows the proxy and transaction interceptor classes for our code's transactional scope, but then it also shows the framework starting its own nested transaction, with another proxy and transaction interceptor of its own. When the framework exits the CrudRepository.save() method my code calls, the commit to Solr is done by the framework's nested transaction, before our outer transaction has exited. So the attempt to batch-process many saves with one commit at the end, instead of one commit per save, is futile. It seems that, for this area of my code, I'll have to use SolrJ to save (update) my entities to Solr and then follow my own transaction's exit with a commit.
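A rough sketch of that SolrJ approach, with hedged assumptions (the core name "foo" and the class and method names are made up; beans passed to addBeans need SolrJ @Field annotations):
import java.io.IOException;
import java.util.Collection;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class SolrBatchIndexer {

    // One client per core; "foo" is a placeholder core name.
    private final SolrClient solr =
            new HttpSolrClient.Builder("http://localhost:8983/solr/foo").build();

    public void indexBatch(Collection<?> entities) throws SolrServerException, IOException {
        solr.addBeans(entities); // queues the documents; no commit is sent yet
        solr.commit();           // call this only after the DB transaction has committed
    }
}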
If you're using Spring Data Solr, I found that the SolrTemplate bean lets you batch updates when adding data to the Solr index. Through the SolrTemplate bean you can use the saveBeans method, which adds a whole collection to the index and does not commit until the end of the transaction. In my case, I started out using solrClient.add() and it took up to 4 hours for my collection to be saved to the index by iterating over it, because it commits after every single save. By using solrTemplate.saveBeans(Collection<?>), it finishes in just over 1 second, as the commit covers the entire collection. Here is a code snippet:
@Resource
SolrTemplate solrTemplate;

public void doReindexing(List<Image> images) {
    if (images != null) {
        /* CMSSolrImage is a class with @SolrDocument mappings.
         * The List<Image> images is a collection pulled from my database
         * that I want indexed in Solr.
         */
        List<CMSSolrImage> sImages = new ArrayList<CMSSolrImage>();
        for (Image image : images) {
            CMSSolrImage sImage = new CMSSolrImage(image);
            sImages.add(sImage);
        }
        solrTemplate.saveBeans(sImages);
    }
}
The way I've done something similar is to create a custom repository implementation of the save methods.
Interface for the repository:
public interface FooRepository extends SolrCrudRepository<Foo, String>, FooRepositoryCustom {
}
Interface for the custom overrides:
public interface FooRepositoryCustom {
    public Foo save(Foo entity);
    public Iterable<Foo> save(Iterable<Foo> entities);
}
Implementation of the custom overrides:
public class FooRepositoryImpl implements FooRepositoryCustom {

    private SolrOperations solrOperations;

    public FooRepositoryImpl(SolrOperations fooSolrOperations) {
        this.solrOperations = fooSolrOperations;
    }

    @Override
    public Foo save(Foo entity) {
        Assert.notNull(entity, "Cannot save 'null' entity.");
        registerTransactionSynchronisationIfSynchronisationActive();
        this.solrOperations.saveBean(entity, 1000);
        commitIfTransactionSynchronisationIsInactive();
        return entity;
    }

    @Override
    public Iterable<Foo> save(Iterable<Foo> entities) {
        Assert.notNull(entities, "Cannot insert 'null' as a List.");
        if (!(entities instanceof Collection<?>)) {
            throw new InvalidDataAccessApiUsageException("Entities have to be inside a collection");
        }
        registerTransactionSynchronisationIfSynchronisationActive();
        this.solrOperations.saveBeans((Collection<? extends Foo>) entities, 1000);
        commitIfTransactionSynchronisationIsInactive();
        return entities;
    }

    private void registerTransactionSynchronisationIfSynchronisationActive() {
        if (TransactionSynchronizationManager.isSynchronizationActive()) {
            registerTransactionSynchronisationAdapter();
        }
    }

    private void registerTransactionSynchronisationAdapter() {
        TransactionSynchronizationManager.registerSynchronization(SolrTransactionSynchronizationAdapterBuilder
                .forOperations(this.solrOperations).withDefaultBehaviour());
    }

    private void commitIfTransactionSynchronisationIsInactive() {
        if (!TransactionSynchronizationManager.isSynchronizationActive()) {
            this.solrOperations.commit();
        }
    }
}
And you also need to provide a SolrOperations bean for the right Solr core:
@Configuration
public class FooSolrConfig {

    @Bean
    public SolrOperations getFooSolrOperations(SolrClient solrClient) {
        return new SolrTemplate(solrClient, "foo");
    }
}
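A quick usage sketch, with hypothetical names (FooIndexService, reindex): while a transaction is active, each save only registers a synchronization, and the single Solr commit fires when the transaction completes rather than once per save.
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class FooIndexService {

    @Autowired
    private FooRepository fooRepository;

    @Transactional
    public void reindex(List<Foo> foos) {
        fooRepository.save(foos); // registers a synchronization; no Solr commit yet
        // the one commit happens at transaction completion, via the registered adapter
    }
}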
Footnote: auto commit is (to my mind) conceptually incompatible with a transaction. An auto commit is a promise from Solr that it will try to start writing to disk within a certain time limit. Many things might stop that from actually happening, however: an untimely power or hardware failure, errors between the document and the schema, and so on. But the client won't know that Solr failed to keep its promise, and the transaction will see a success when it actually failed.

Transaction rollback and save info

In the service layer, I have a method with a @Transactional annotation.
@Transactional
public void process() throws ProcessPaymentException {
    try {
        // ... do some operation
    } catch (ProcessPaymentException ppe) {
        // save db problem issue
    }
}
It seems like if there is an issue, there is a rollback and nothing is saved in the DB.
ProcessPaymentException extends Exception.
Is there a way to roll back the process in the try block but still do the save in the catch block?
Edit
Nested transactions could be a solution, if this link is right:
https://www.credera.com/blog/technology-insights/java/common-oversights-utilizing-nested-transactions-spring/
The existing answer using @ControllerAdvice should help in a normal setup where incoming requests come through Spring MVC (i.e. through a controller).
For cases where that is not true, or where you do not want to tie your exception-handling logic to Spring MVC, here are some alternatives I can think of.
(Here I assume you want to rely on declarative transaction control instead of controlling transactions programmatically yourself.)
Separate service/component to save the error in a different transaction.
In short, you can have a separate service which creates its own transaction via propagation REQUIRES_NEW, e.g.:
@Service
public class FooService {

    @Inject
    private ErrorAuditService errorAuditService;

    @Transactional
    public void process() throws ProcessPaymentException {
        try {
            // ... do some operation
        } catch (ProcessPaymentException ppe) {
            errorAuditService.saveErrorAudit(ppe.getErrorText());
            throw ppe; // I guess you want to re-throw the exception
        }
    }
}

@Service
public class ErrorAuditService {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void saveErrorAudit(String errorText) {
        // save to DB
    }
}
One step further: if the error handling is the same across different services, you may create an advice which is invoked when a service method throws an exception. In that advice, you can save the error to the DB (using ErrorAuditService) and rethrow the exception.
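A minimal sketch of such an advice, assuming Spring AOP and a made-up pointcut package (com.example.service; adjust it to your service layer). The audit write runs in the REQUIRES_NEW transaction inside ErrorAuditService, so it survives the rollback of the business transaction:
import javax.inject.Inject;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class ErrorAuditAspect {

    @Inject
    private ErrorAuditService errorAuditService;

    // Fires after a matching service method throws; the exception still propagates.
    @AfterThrowing(pointcut = "execution(* com.example.service..*(..))", throwing = "ex")
    public void auditFailure(ProcessPaymentException ex) {
        errorAuditService.saveErrorAudit(ex.getErrorText());
    }
}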
Because both branches of the try-catch are wrapped by the same transaction, and the transaction manager rolls back whenever an exception is thrown, nothing would be saved.
Is there a way to roll back the process in the try but do the save in the catch?
Yes: create an exception handler to save the DB problem issue after the rollback.
This is the idea:
@ControllerAdvice
public class HandlerName {

    @ExceptionHandler(ProcessPaymentException.class)
    public void saveDbIssue(ProcessPaymentException ex) {
        // save db problem issue
    }
}
But it only works if you want to save static data.

ActionFilter for NHibernate transaction management: is this an OK way to go?

I have the following wrapper:
public interface ITransactionScopeWrapper : IDisposable
{
    void Complete();
}

public class TransactionScopeWrapper : ITransactionScopeWrapper
{
    private readonly TransactionScope _scope;
    private readonly ISession _session;
    private readonly ITransaction _transaction;

    public TransactionScopeWrapper(ISession session)
    {
        _session = session;
        _scope = new TransactionScope(TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted });
        _transaction = session.BeginTransaction();
    }

    #region ITransactionScopeWrapper Members

    public void Dispose()
    {
        try
        {
            _transaction.Dispose();
        }
        finally
        {
            _scope.Dispose();
        }
    }

    public void Complete()
    {
        _session.Flush();
        _transaction.Commit();
        _scope.Complete();
    }

    #endregion
}
In my ActionFilter I have the following:
public class NhibernateTransactionAttribute : ActionFilterAttribute
{
    public ITransactionScopeWrapper TransactionScopeWrapper { get; set; }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        TransactionScopeWrapper.Complete();
        base.OnActionExecuted(filterContext);
    }
}
I am using Castle to manage my ISession using a lifestyle of per web request:
container.Register(
    Component.For<ISessionFactory>().UsingFactoryMethod(
        x => x.Resolve<INHibernateInit>().GetConfiguration().BuildSessionFactory()).LifeStyle.Is(
        LifestyleType.Singleton));
container.Register(
    Component.For<ISession>().UsingFactoryMethod(x => container.Resolve<ISessionFactory>().OpenSession()).
        LifeStyle.Is(LifestyleType.PerWebRequest));
container.Register(
    Component.For<ITransactionScopeWrapper>().ImplementedBy<TransactionScopeWrapper>().LifeStyle.Is(
        LifestyleType.PerWebRequest));
So now on to my questions:
1. Are there any issues with managing the transaction this way?
2. Do an ActionFilter's OnActionExecuting and OnActionExecuted methods use the same thread?
I ask question 2 because BeginRequest and EndRequest are not guaranteed to run on the same thread, and if you hang transactions off them you will run into big problems.
In my ActionFilter, TransactionScopeWrapper is property-injected.
There are some other aspects you should also look into.
First, decide where to dispose of your transaction. Be aware that if you use lazy loading and pass a data entity back to your view, then access a property or reference that is configured to be lazy loaded, you'll run into problems, because your transaction has already been closed in OnActionExecuted. As far as I know you should only use view models in your views, but sometimes an entity is a little more convenient. Regardless of the reason, if you do want lazy loading and to access such members in your views, you'll have to move your transaction completion into the OnResultExecuted method so that the transaction doesn't get committed prematurely.
Second, you should look into checking whether there were any exceptions or model errors before committing your transaction. I ended up using inspiration from here and here for my final filter for dealing with my NHibernate transaction.
Third, if you decide to dispose of your transaction in the OnResultExecuted handler, make sure you do not do so when handling a request for a child action. Like you, I scoped my session to the web request, but I found that child actions don't count as a new request: when they are called and try to open their own session, they get the already open session context instead. When the child action then completed, it was trying to close ITS session but was actually closing the session used by the parent view as well. This caused any logic after the child action that relied on lazy-loaded data to fail as well.
I'd like to go through and remove lazy-loaded data from my app where views are concerned, but until I get the time to do so, you should be aware of these issues that may come up.
I was going to post my own action filter when I realized I had some DRY issues I needed to fix. Suffice it to say, I check filterContext.Exception and filterContext.ExceptionHandled to see whether there were any errors and whether they have been handled already. Note that just because an exception was handled doesn't mean your transaction is OK to be committed. And though this is more subjective to how your app is coded, you may also want to check filterContext.Controller.ViewData.ModelState.IsValid before you commit your transaction as well.
UPDATE: Unlike you, I'm using StructureMap rather than Castle for dependency injection, but in my case I added this line to my Application_EndRequest method in the global.asax file as a final bit of cleanup. I'm assuming there is something similar in Castle?
StructureMap.ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
UPDATE 2: Anyway, a more direct answer to your question. I don't see anything wrong with using a wrapper like you opted to, though I am not sure why you feel the need to wrap it: NHibernate does a really good job of handling the transaction itself, so another abstraction layer around that seems unneeded to me. You could just as easily start the transaction explicitly in your OnActionExecuting and complete it explicitly in your OnActionExecuted. By retrieving the ISession object through the DependencyResolver you eliminate any concerns you may have with threading, as the IoC container is thread-safe, I believe; from there you can get your current transaction using Session.Transaction and check its current state from the IsActive property. My understanding is that it's possible for the two methods to occur on different threads, though, particularly when dealing with an action on a class inheriting from AsyncController.
I've got a problem with such a method: what does it do if you use "@Html.Action("TestMethod", "TestController")"?
As for me, I prefer to use an explicit transaction call:
using (var tx = session.BeginTransaction())
{
    // perform your insert here
    tx.Commit();
}
As for thread safety, I'd like to know about that too.
