Grails service transactional behaviour - Spring

In a Grails app, the default behaviour of service methods is that they are transactional and the transaction is automatically rolled back if an unchecked exception is thrown. However, in Groovy one is not forced to handle (or rethrow) checked exceptions, so there's a risk that if a service method throws a checked exception, the transaction will not be rolled back. On account of this, it seems advisable to annotate every Grails service class like this:
@Transactional(rollbackFor = Throwable.class)
class MyService {
    void writeSomething() {
    }
}
Assume I have other methods in MyService, one of which only reads the DB and another which doesn't touch the DB at all. Are the following annotations correct?
@Transactional(readOnly = true)
void readSomething() {}

// Maybe this should be propagation = Propagation.NOT_SUPPORTED instead?
@Transactional(propagation = Propagation.SUPPORTS)
void dontReadOrWrite() {}
In order to answer this question, I guess you'll need to know what my intention is:
If an exception is thrown from any method and there's a transaction in progress, it will be rolled back. For example, if writeSomething() calls dontReadOrWrite(), and an exception is thrown from the latter, the transaction started by the former will be rolled back. I'm assuming that the rollbackFor class-level attribute is inherited by individual methods unless they explicitly override it.
If there's no transaction in progress, one will not be started for methods like dontReadOrWrite.
If no transaction is in progress when readSomething() is called, a read-only transaction will be started. If a read-write transaction is in progress, it will participate in this transaction.

Your code is right as far as it goes: you do want to use the Spring @Transactional annotation on individual methods in your service class to get the granularity you're looking for. You're right that you want SUPPORTS for dontReadOrWrite (NOT_SUPPORTED will suspend an existing transaction, which won't buy you anything based on what you've described and will cost your software cycles, so there's pain for no gain), and you're right that you want the default propagation behavior (REQUIRED) for readSomething.
But an important thing to keep in mind with Spring transactional behavior is that Spring implements transaction management by wrapping your class in a proxy that does the appropriate transaction setup, invokes your method, and then does the appropriate transaction tear-down when control returns. And (crucially), this transaction-management code is only invoked when you call the method on the proxy, which doesn't happen if writeSomething() directly calls dontReadOrWrite() as in your first bullet.
If you need different transactional behavior on a method that's called by another method, you've got two choices that I know of if you want to keep using Spring's @Transactional annotations for transaction management:
Move the method being called by the other into a different service class, which will be accessed from your original service class via the Spring proxy.
Leave the method where it is. Declare a member variable in your service class of the same type as your service class's interface and make it @Autowired, which will give you a reference to your service class's Spring proxy object. Then, when you want to invoke your method with the different transactional behaviour, do it on that member variable rather than directly, and the Spring transaction code will fire as you want it to.
Approach #1 is great if the two methods really aren't related anyway, because it solves your problem without confusing whoever ends up maintaining your code, and there's no way to accidentally forget to invoke the transaction-enabled method.
Approach #2 is usually the better option, assuming that your methods are all in the same service for a reason and that you wouldn't really want to split them out. But it's confusing to a maintainer who doesn't understand this wrinkle of Spring transactions, and you have to remember to invoke it that way in each place you call it, so there's a price to it. I'm usually willing to pay that price to not splinter my service classes unnaturally, but as always, it'll depend on your situation.
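For concreteness, here is a minimal sketch of approach #2 in plain Spring/Java terms. The self-injected field and the @Service annotation are illustrative assumptions (in a Grails service the proxying is set up for you), and whether self-autowiring works exactly like this depends on your Spring version and proxy configuration:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyService {

    // Reference to this service's own Spring proxy; calls made through it
    // (unlike plain "this" calls) go through the transaction interceptor.
    @Autowired
    private MyService self;

    @Transactional(rollbackFor = Throwable.class)
    public void writeSomething() {
        // ... write to the database ...
        self.dontReadOrWrite(); // goes via the proxy, so the method-level annotation is honoured
    }

    @Transactional(propagation = Propagation.SUPPORTS)
    public void dontReadOrWrite() {
        // no database access here
    }
}

Calling this.dontReadOrWrite() directly instead of self.dontReadOrWrite() would bypass the proxy and silently ignore the method-level annotation.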

I think that what you're looking for is more granular transaction management, and using the @Transactional annotation is the right direction for that. That said, there is a Grails Transaction Handling Plugin that can give you the behavior that you're looking for. The caveat is that you will need to wrap your service method calls in a DomainClass.withTransaction closure and supply the non-standard behavior that you're looking for as a parameter map to the withTransaction() method.
As a note, on the backend this is doing exactly what you're talking about above by using the @Transactional annotation to change the behavior of the transaction at runtime. The plugin documentation is excellent, so I don't think you'll find yourself without sufficient guidance.
Hope this is what you're looking for.

Related

Why does Hibernate not support nested transactions outside of Spring?

I am using Hibernate 4 but not Spring. In the application I am developing I want to log a record of every Add, Update, Delete to a separate log table. As it stands at the moment my code does two transactions in sequence, and it works, but I really want to wrap them up into one transaction.
I know Hibernate does not support nested transactions, only in conjunction with the Spring framework. I've read about savepoints, but they're not quite the same thing.
Nothing in the JPA or JTA specifications provides support for nested transactions.
What you most likely mean by "support by Spring" is @Transactional annotations on multiple methods in a call hierarchy. What Spring does in that situation is check whether there is an ongoing transaction and, if not, start a new one.
You might think that the following situation is a nested transaction.
@Transactional
public void method1() {
    method2(); // method in another class
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void method2() {
    // do something
}
What happens in reality is, simplified, the following. The type of transactionManager1 and transactionManager2 is javax.transaction.TransactionManager:
// call of method1 intercepted by spring
transactionManager1.begin();
// invocation of method1
// call of method 2 intercepted by spring (requires new detected)
transactionManager1.suspend();
transactionManager2.begin();
// invocation of method2
// method2 finished
transactionManager2.commit();
transactionManager1.resume();
// method1 finished
transactionManager1.commit();
In other words, the first transaction is basically put on pause. It is important to understand this, because the transaction of transactionManager2 might not see changes made by transactionManager1, depending on the transaction isolation level.
Maybe a little background on why I know this: I've written a prototype of a distributed transaction management system that allows methods to be executed transparently in a cloud environment (one method gets executed on one instance, the next method might be executed somewhere else).

@Transactional with 1 save statement

Does it make sense to mark a method containing only a single save statement with Spring's @Transactional?
For example:
// does it make sense to mark it with @Transactional if it only has 1 save statement?
@Transactional
public void saveMethod() {
    user.save();
}
If you're using a Spring Data repository interface, you don't need to use the @Transactional annotation. It's only needed when you want to perform two operations against the database and share the transaction between them, so that both actions can be rolled back together.
Anyway, it is always better to use @Transactional, even as read-only for get queries (setting the FlushMode to MANUAL lets persistence providers potentially skip dirty checks when closing the EntityManager), so I would suggest putting @Transactional at the service layer, with read-only for get queries.
Yes, because any "modification" of the database requires a transaction (to commit the changes). You might think otherwise because of auto-commit, but in the end there is still a transaction there somewhere.
But it is always a good thing to explicitly define the boundaries of your transaction (when transaction is started and when it is committed).
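To make the suggestion concrete, here is a minimal sketch of a service-layer class with an explicit read-write boundary and a read-only query method; UserService, UserRepository and the method names are assumptions for illustration, not anything from the question:

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    // Read-only transaction: the persistence provider can skip dirty checking.
    @Transactional(readOnly = true)
    public List<User> findAllUsers() {
        return userRepository.findAll();
    }

    // Read-write transaction: the boundary around the single save is explicit.
    @Transactional
    public void saveUser(User user) {
        userRepository.save(user);
    }
}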

Perform JSR 303 validation in transactional service call

I am using the play framework to develop a web application which accesses a postgres db using JOOQ and spring transactions.
Currently I am implementing the user signup which is structured in the following way:
The user posts the signup form
The request is routed to a controller action which maps all parameters like e-mail, password, etc. onto a DTO. The different fields of the DTO are annotated with JSR 303 constraints.
The e-mail field is annotated with a constraint validator that makes sure the same address is not added twice. This validator has an autowired reference to the UserRepository, so that it can invoke its isExistingEmail method.
The signup method of the user service is called, which basically looks as follows:
@Transactional(isolation = Isolation.SERIALIZABLE)
public User signupUser(UserDto userDto) {
    validator.validate(userDto);
    userRepository.add(userDto);
    return tutor;
}
In case of a validation error the validator.validate(userDto) call inside of the service will throw a RuntimeException.
Please note that the repository's add method is annotated with @Transactional(propagation = Propagation.MANDATORY), while the isExistingEmail method does not have any annotations.
My problem is that when I post the signup form twice in succession, I receive a unique constraint error from the database since both times the userRepository.isExistingEmail call returns false. However, what I would expect is that the second signup call is not allowed to add the user to the repository, as I set the isolation level of the transaction to serializable.
Is this the expected behavior or might there be a JOOQ/spring transactions configuration issue?
I added a TransactionSynchronizationManager.isActualTransactionActive() call in the service to make sure a transaction is actually active. So this part seems to work.
After some more research and reading the documentation on transaction isolation in the Postgres manual, I have come to realize that my understanding of Spring-managed transactions was simply lacking.
When the isolation level is set to SERIALIZABLE, Postgres won't really block concurrent transactions. Instead it uses predicate locks to monitor whether a committing transaction would produce a result different from what actually running the concurrent transactions one after another would produce.
An exception will only be thrown by the underlying database driver if the state of the data is not valid when the second transaction tries to commit. I was able to verify this behavior and to force a serialization failure by temporarily removing the unique constraint on my e-mail field.
My final solution was to reduce the isolation level to READ_COMMITTED and to handle the unique constraint violation exception when invoking userRepository.add(userDto), since a SERIALIZABLE isolation level is not really necessary to deal with this particular use case.
Please let me know of any better ways of dealing with this kind of standard situation.
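For reference, a hedged sketch of that final solution. DataIntegrityViolationException comes from org.springframework.dao and only applies if Spring's exception translation is in place (with raw JOOQ you might catch org.jooq.exception.DataAccessException instead), and EmailAlreadyInUseException is a hypothetical application exception:

@Transactional(isolation = Isolation.READ_COMMITTED)
public void signupUser(UserDto userDto) {
    validator.validate(userDto);
    try {
        userRepository.add(userDto);
    } catch (DataIntegrityViolationException e) {
        // the unique constraint on the e-mail column was hit by a concurrent signup
        throw new EmailAlreadyInUseException("e-mail already registered", e);
    }
}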

Inject Session object to DAO bean instead of Session Factory?

In our application we are using Spring and Hibernate.
In all the DAO classes we have a SessionFactory auto-wired, and each DAO method calls the getCurrentSession() method.
The question I have is: why not inject a Session object instead of the SessionFactory object, in prototype scope? This would save us the call to getCurrentSession().
I think the first approach is correct, but I'm looking for concrete scenarios where the second approach would throw errors or perhaps perform badly.
When you define a bean with prototype scope, a new instance is created for each place it needs to be injected into. So each DAO would get a different instance of Session, but all invocations of methods on that DAO would end up using the same session. Since a Session is not thread safe and should not be shared across multiple threads, this will be an issue.
For most situations the session should be transaction scope, i.e., a new session is opened when the transaction starts and then is closed automatically once the transaction finishes. In a few cases it might have to be extended to request scope.
If you want to avoid calling SessionFactory.getCurrentSession(), then you will need to define your own scope implementation to achieve that.
This is something that is already implemented for JPA using proxies. In the case of JPA, an EntityManager is injected instead of an EntityManagerFactory. Instead of @Autowired there is the @PersistenceContext annotation. A proxy is created and injected during initialization. When any method is invoked, the proxy gets hold of the actual EntityManager implementation (using something similar to SessionFactory.getCurrentSession()) and delegates to it.
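For illustration, a minimal sketch of that JPA arrangement (the UserDao class and User entity are assumptions):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;

@Repository
public class UserDao {

    // Spring injects a thread-safe proxy; every call is delegated to the
    // EntityManager associated with the current transaction.
    @PersistenceContext
    private EntityManager entityManager;

    public User find(Long id) {
        return entityManager.find(User.class, id);
    }
}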
A similar thing can be implemented for Hibernate as well, but the additional complexity is not worth it. It is much simpler to define a getSession() method in a BaseDAO which internally calls SessionFactory.getCurrentSession(). With this, the code using the session is identical to what it would be with an injected session.
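And a sketch of that BaseDAO idea for Hibernate (class and method names are illustrative):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;

public abstract class BaseDao {

    @Autowired
    private SessionFactory sessionFactory;

    // Subclasses call this instead of having a Session injected; it always
    // returns the session bound to the current transaction/session context.
    protected Session getSession() {
        return sessionFactory.getCurrentSession();
    }
}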
Injecting prototype sessions means that each one of your DAO objects will, by definition, get its own Session... On the other hand, SessionFactory gives you the power to open and share sessions at will.
In fact, getCurrentSession() will not open a new Session on every call... Instead, it will reuse sessions bound to the current session context (e.g., thread, JTA transaction or externally managed context).
So let's think about it; assume that in your business layer there is an operation that needs to read and update several database tables (which means interacting, directly or indirectly, with several DAOs)... Pretty common scenario, right? Customarily, when this kind of operation fails, you will want to roll back everything that happened in the current operation. So, for this particular case, which strategy seems appropriate?
Spanning several sessions, each one managing its own kind of objects and bound to a different transaction.
Have a single session managing the objects related to this operation... Demarcate the transactions according to your business needs.
In brief, sharing sessions and demarcating transactions effectively will not only improve your application's performance; it is part of the functionality of your application.
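As an example of the second strategy, here is a hedged sketch of one business operation touching several DAOs inside a single transaction; the service, DAOs and method names are hypothetical:

import java.math.BigDecimal;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TransferService {

    @Autowired
    private AccountDao accountDao;

    @Autowired
    private AuditDao auditDao;

    // Every DAO call below uses the same current Session, bound to this
    // transaction; if anything fails, all of the changes roll back together.
    @Transactional
    public void transfer(long fromAccountId, long toAccountId, BigDecimal amount) {
        accountDao.debit(fromAccountId, amount);
        accountDao.credit(toAccountId, amount);
        auditDao.recordTransfer(fromAccountId, toAccountId, amount);
    }
}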
I would deeply recommend you to read Chapter 2 and Chapter 13 of the Hibernate Core Reference Manual to better understand the roles that SessionFactory, Session and Transaction play within the framework. It will also teach you about units of work, as well as popular session patterns and anti-patterns.

Spring and JUnit: what is the difference between annotating classes and methods with @Transactional?

I would like to confirm that if I annotate my JUnit class with @Transactional, Spring will create only one transaction which is shared among my @Test methods and rolled back at the end, whereas if instead I mark each single @Test with @Transactional, a new transaction will be created and rolled back per @Test. I didn't quite find the expected behaviour in the official documentation (link).
Putting @Transactional on a class level is equivalent to putting it on each of the test methods.
I don't think there is an easy way of achieving your first scenario, i.e., a single transaction for all the tests. I am not sure it would make much sense anyway, since tests will be executed in a random order, so you cannot rely on them seeing each other's modifications. Of course, you can always explicitly call your methods from a single uber-test with a single transaction.
@Transactional at the JUnit test case class level will start a new transaction before each test method and roll it back afterwards.
You cannot easily start a new transaction at the beginning of the test class and keep it open for the whole class; at least, Spring does not support this. Your first workaround would be a @BeforeClass/@AfterClass pair, but those methods must be static, so you don't have access to the transaction manager.
But first ask yourself a question: why do you want to do this? It sounds like one test depends on the output or database side effects of another. That is a big anti-pattern in testing, not to mention that JUnit does not guarantee the order in which test methods are executed. Maybe you just need a database setup before each test case? @Before and @After are executed within the context of the Spring-managed transaction, so you can use them there.
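For reference, a minimal sketch of a class-level @Transactional JUnit 4 test with the Spring TestContext framework; AppConfig, MyService and the setup logic are assumptions:

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = AppConfig.class)
@Transactional // each @Test method runs in its own transaction, rolled back afterwards
public class MyServiceTests {

    @Autowired
    private MyService myService;

    @Before
    public void setUpData() {
        // runs inside the same transaction as the test method that follows
    }

    @Test
    public void writeIsRolledBackAfterTheTest() {
        myService.writeSomething();
        // nothing written here survives once the test finishes
    }
}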
