How to deal with a Spring Hibernate "no lock acquired" exception inside a transaction - spring

I have applied @Transactional on my interface, and inside my serviceImpl the corresponding method calls some other methods: one method is reading, another method is writing. Although I have annotated it as @Transactional, when I send concurrent requests my insert method throws an org.hibernate.exception.LockAcquisitionException error.
Another problem is that this insert method is a shared method and it performs the insert like Dao.save(obj). Dao.save() is a generic method, so I cannot change anything there; I have to apply something on the interface to avoid the "no lock acquired" exception.
Is it possible to wait until the lock is acquired? Or retry if the transaction fails? Or lock all the tables until the transaction completes so that another request cannot access the relevant resources?
My Hibernate version is 3.x, and the database is MySQL 5.6.

The best way to do this is:
Mark your methods as transactional.
In your MySQL database settings, set the transaction isolation level to SERIALIZABLE.
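For illustration, here is a minimal sketch of those two steps with Spring, using hypothetical class names (ItemService, GenericDao and Item are placeholders, not taken from the question); on the MySQL side the same isolation level can be set globally, e.g. with transaction-isolation = SERIALIZABLE in my.cnf:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ItemService {

    private final GenericDao dao; // stands in for the shared Dao.save(obj) helper

    public ItemService(GenericDao dao) {
        this.dao = dao;
    }

    // The whole method runs in one serializable transaction, so concurrent
    // writers behave as if they executed one after another; if the lock still
    // cannot be acquired, the caller can catch the exception and retry.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void insertItem(Item item) {
        dao.save(item);
    }
}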

Related

Perform JSR 303 validation in transactional service call

I am using the Play framework to develop a web application which accesses a Postgres DB using JOOQ and Spring transactions.
Currently I am implementing the user signup which is structured in the following way:
The user posts the signup form
The request is routed to a controller action which maps all parameters like e-mail, password etc. on a DTO. The different fields of the DTO are annotated with JSR 303 constraints.
The e-mail field is annotated with a constraint validator that makes sure that the same address is not added twice. This validator has an autowired reference to the UserRepository, so that it can invoke its isExistingEmail method.
The signup method of the user service is called, which basically looks as follows:
@Transactional(isolation = Isolation.SERIALIZABLE)
public User signupUser(UserDto userDto) {
    validator.validate(userDto);
    User user = userRepository.add(userDto);
    return user;
}
In case of a validation error the validator.validate(userDto) call inside of the service will throw a RuntimeException.
Please note that the repository's add method is annotated with @Transactional(propagation = Propagation.MANDATORY) while the isExistingEmail method does not have any annotations.
My problem is that when I post the signup form twice in succession, I receive a unique constraint error from the database since both times the userRepository.isExistingEmail call returns false. However, what I would expect is that the second signup call is not allowed to add the user to the repository, as I set the isolation level of the transaction to serializable.
Is this the expected behavior or might there be a JOOQ/spring transactions configuration issue?
I added a TransactionSynchronizationManager.isActualTransactionActive() call in the service to make sure a transaction is actually active. So this part seems to work.
After some more research and reading the documentation on transaction isolation in the Postgres manual, I have come to realize that my understanding of Spring-managed transactions was simply lacking.
When the isolation level is set to SERIALIZABLE, Postgres won't really block concurrent transactions. Instead it uses predicate locks to detect whether committing a transaction would produce a result different from running the concurrent transactions one after another.
An exception will only be thrown by the underlying database driver if the state of the data is not valid when the second transaction tries to commit. I was able to verify this behavior and to force a serialization failure by temporarily removing the unique constraint on my e-mail field.
My final solution was to reduce the isolation level to READ_COMMITTED and to handle the unique constraint violation exception when invoking userRepository.add(userDto), since a SERIALIZABLE isolation level is not really necessary to deal with this particular use case.
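A minimal sketch of that final approach, assuming Spring's exception translation maps the unique index violation to DataIntegrityViolationException (with JOOQ it may surface as another DataAccessException subclass instead) and using a hypothetical DuplicateEmailException and getEmail() accessor:

import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(isolation = Isolation.READ_COMMITTED)
public User signupUser(UserDto userDto) {
    validator.validate(userDto);
    try {
        User user = userRepository.add(userDto);
        return user;
    } catch (DataIntegrityViolationException e) {
        // The unique index on the e-mail column rejected the insert,
        // so a concurrent signup with the same address committed first.
        throw new DuplicateEmailException(userDto.getEmail(), e); // hypothetical exception type
    }
}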
Please let me know of any better ways of dealing with this kind of standard situation.

Spring jdbcTemplate Rollback for multiple database operations

I have a program that uses a handler, a businessObject and a DAO for program execution. Control flows from the handler to the businessObject and finally to the DAO for database operations.
For example, my program does 3 operations: insertEmployee(), updateEmployee() and deleteEmployee(), each method being called one after the other from the handler. Once insertEmployee() is called, control goes back to the handler, which then calls updateEmployee(); control returns to the handler again and it calls deleteEmployee().
Problem statement: suppose my first two DAO methods succeed, control is back in the handler, and the next method it requests from the DAO is deleteEmployee(), which then throws some kind of exception. In that case it should be able to roll back the earlier insertEmployee() and updateEmployee() operations as well, not just deleteEmployee(); it should behave as if this program never ran.
Can anyone point me to how to achieve this with Spring jdbcTemplate transaction management?
You should read about transaction propagation, in particular PROPAGATION_REQUIRED.
More info:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/transaction.html#tx-propagation
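As a rough sketch of that idea (class and method names are hypothetical, not from the question): move the three DAO calls into a single service method annotated with @Transactional, configure a DataSourceTransactionManager on the same DataSource the JdbcTemplate uses, and let the handler call only this method. A runtime exception in deleteEmployee() then rolls back the earlier insert and update as well:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EmployeeService {

    private final EmployeeDao employeeDao; // DAO methods use a JdbcTemplate internally

    public EmployeeService(EmployeeDao employeeDao) {
        this.employeeDao = employeeDao;
    }

    // PROPAGATION_REQUIRED is the default: all three calls join this one
    // transaction, so a failure anywhere rolls back everything.
    @Transactional
    public void processEmployee(Employee employee) {
        employeeDao.insertEmployee(employee);
        employeeDao.updateEmployee(employee);
        employeeDao.deleteEmployee(employee.getId());
    }
}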

Does Hibernate 1st Cache work with HibernateTemplate?

I am trying to dive a little deeper into Hibernate cache and Spring's HibernateTemplate, and I am kinda confused by the following questions:
1) How does HibernateTemplate manage the Hibernate session? If I invoke methods such as "getHibernateTemplate().find()" and "getHibernateTemplate().execute()", will HibernateTemplate open a new Hibernate session every time?
2) The Hibernate 1st-level cache works in the scope of a Hibernate session. In that regard, if HibernateTemplate always opens a new session to execute/find, does that mean the Hibernate 1st-level cache is NOT usable with HibernateTemplate? (Because the cached objects will be destroyed anyway, and the next find() has to make a fresh query to get everything again from the DB.)
3) It seems the Hibernate 1st-level cache holds a map of all objects fetched during the life of a session. In this case, if I query an object which had been fetched before in the same session, should I get the object and all its data directly from the cache? And in that regard, what happens if the data of this object has been modified in the database in the meantime?
4) The Hibernate 1st-level cache returns data in the form of objects. In that regard, if I use HQL to fetch only a few columns (attributes) from a table (entity), will that data (objects with only part of their attributes filled) also be cached?
Thanks a lot!
-------------------------- Additional Info --------------------------
Thanks to the hint from Alessio, I re-checked the Spring specification and the following is my understanding:
If I call getHibernateTemplate() within an existing transaction block (e.g., after "session.beginTransaction();"), then HibernateTemplate will use the existing transaction to execute.
If I call getHibernateTemplate() with no transaction in the current thread, then HibernateTemplate will actually call "openSession()" instead of "getCurrentSession()" because there is NO transaction (even when openSession() had been called before and an open session was already bound to the current thread), and a new session will be created for the HibernateTemplate to use. Once the HibernateTemplate has done its work, the newly created session will be closed.
Is my understanding right?
The Spring documentation says the following about session access and creation:
"HibernateTemplate is aware of a corresponding Session bound to the current thread, for example when using HibernateTransactionManager. If allowCreate is true, a new non-transactional Session will be created if none is found, which needs to be closed at the end of the operation. If false, an IllegalStateException will get thrown in this case."
So, whether it creates a new session or not depends on the allowCreate property and the presence of an interceptor that sets up a session for the current thread. Note also that HibernateTemplate is capable of falling back to Hibernate's SessionFactory.getCurrentSession().
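To make that concrete, here is a minimal sketch using the Spring 3.x / Hibernate 3 classes from org.springframework.orm.hibernate3, with hypothetical PersonDao and Person types: when a HibernateTransactionManager drives the transaction, both template calls below reuse the session bound to the current thread, so the second call can be served from the first-level cache:

import java.io.Serializable;
import org.springframework.orm.hibernate3.support.HibernateDaoSupport;
import org.springframework.transaction.annotation.Transactional;

public class PersonDao extends HibernateDaoSupport { // sessionFactory injected via setSessionFactory()

    @Transactional
    public Person saveAndReload(Person person) {
        Serializable id = getHibernateTemplate().save(person);
        // Same thread-bound session, so this lookup is served from the
        // 1st-level cache instead of issuing a fresh query against the database.
        return (Person) getHibernateTemplate().get(Person.class, id);
    }
}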
-------------------------- Additional Info --------------------------
Edit: to answer the additional question by the author, the documentation is not very explicit about this, but a session is obtained or created when getSession() is called, which of course includes calls to execute(). A session is not created or accessed in any way when you only instantiate a HibernateTemplate or get it from an application context, as the call to getHibernateTemplate() presumably does.

Inject Session object to DAO bean instead of Session Factory?

In our application we are using Spring and Hibernate.
In all the DAO classes we have a SessionFactory autowired, and each of the DAO methods calls the getCurrentSession() method.
The question I have is: why not inject a Session object instead of the SessionFactory object, in prototype scope? This would save us the call to getCurrentSession().
I think the first approach is correct, but I am looking for concrete scenarios where the second approach would throw errors or perhaps have bad performance.
When you define a bean with prototype scope, a new instance is created for each place it is injected into. So each DAO will get a different instance of Session, but all invocations of methods on that DAO will end up using the same session. Since a session is not thread safe and should not be shared across multiple threads, this will be an issue.
For most situations the session should be transaction scope, i.e., a new session is opened when the transaction starts and then is closed automatically once the transaction finishes. In a few cases it might have to be extended to request scope.
If you want to avoid using SessionFactory.getCurrentSession(), then you will need to define your own scope implementation to achieve that.
This is something that is already implemented for JPA using proxies. In the case of JPA, an EntityManager is injected instead of an EntityManagerFactory. Instead of @Autowired there is a @PersistenceContext annotation. A proxy is created and injected during initialization. When any method is invoked, the proxy obtains the actual EntityManager implementation (using something similar to SessionFactory.getCurrentSession()) and delegates to it.
A similar thing can be implemented for Hibernate as well, but the additional complexity is not worth it. It is much simpler to define a getSession() method in a BaseDAO which internally calls SessionFactory.getCurrentSession(). With this, the code using the session is identical to what it would be with an injected session.
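A minimal sketch of that BaseDAO pattern (class names and the User entity are hypothetical), assuming a HibernateTransactionManager has bound a session to the current thread for the duration of the transaction:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;

public abstract class BaseDao {

    @Autowired
    private SessionFactory sessionFactory;

    // Returns the session bound to the current transaction/thread, so every
    // DAO method running in the same transaction shares one session.
    protected Session getSession() {
        return sessionFactory.getCurrentSession();
    }
}

class UserDao extends BaseDao {

    public User findById(Long id) {
        return (User) getSession().get(User.class, id);
    }
}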
Injecting prototype sessions means that each one of your DAO objects will, by definition, get its own Session... On the other hand, SessionFactory gives you the power to open and share sessions at will.
In fact, getCurrentSession will not open a new Session on every call... Instead, it will reuse sessions bound to the current session context (e.g., thread, JTA transaction or externally managed context).
So let's think about it; assume that in your business layer there is an operation that needs to read and update several database tables (which means interacting, directly or indirectly, with several DAOs)... Pretty common scenario, right? Customarily, when this kind of operation fails you will want to roll back everything that happened in the current operation, right? So, for this "particular" case, which strategy seems appropriate?
Spanning several sessions, each one managing its own kind of objects and bound to different transactions.
Have a single session managing the objects related to this operation... Demarcate the transactions according to your business needs.
In brief, sharing sessions and demarcating transactions effectively will not only improve your application performance, it is part of the functionality of your application.
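As a rough sketch of that single-session, single-transaction strategy (all names are hypothetical): the service method below is the transaction boundary, both DAOs obtain the same thread-bound session via getCurrentSession(), and any runtime exception rolls back the whole unit of work:

import java.math.BigDecimal;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TransferService {

    @Autowired
    private AccountDao accountDao; // both DAOs call sessionFactory.getCurrentSession()
    @Autowired
    private AuditDao auditDao;

    // One transaction and one shared Session for the whole operation.
    @Transactional
    public void transfer(Long fromId, Long toId, BigDecimal amount) {
        accountDao.debit(fromId, amount);
        accountDao.credit(toId, amount);
        auditDao.recordTransfer(fromId, toId, amount);
    }
}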
I would strongly recommend you read Chapter 2 and Chapter 13 of the Hibernate Core Reference Manual to better understand the roles that SessionFactory, Session and Transaction play within the framework. It will also teach you about units of work as well as popular session patterns and anti-patterns.

Spring Bean Hangs on Method with #Transactional

Just a little background: I'm a new developer who has recently taken over a major project after the senior developer left the company, before I could develop a full understanding of how he structured it. I'll try to explain my issue as best I can.
This application creates several MessageListener threads to read objects from JMS queues. Once an object is received, the data is manipulated based on some business logic and then mapped to a persistence object to be saved to an Oracle database using a Hibernate EntityManager.
Up until a few weeks ago there hadn't been any major issues with this configuration in the year or so since I joined the project. But for one of the queues (the issue is isolated to this particular queue), the Spring-managed bean that processes the received object hangs at the method below. My debugging has led me to conclude that it completes everything within the method but hangs upon completion. After weeks of trying to resolve this I'm at the end of my rope. Any help would be greatly appreciated.
Since each MessageListener gets its own processor, this hanging method only affects the incoming data on one queue.
@Transactional(propagation = Propagation.REQUIRES_NEW, timeout = 180)
public void update(UserRelatedData userData, User user,Company company,...)
{
...
....
//business logic performed on user object
....
......
entityMgr.persist(user);
//business logic performed on userData object
...
....
entityMgr.persist(userData);
...
....
entityMgr.flush();
}
I inserted debug statements just to walk through the method, and it completes everything including entityMgr.flush().
REQUIRES_NEW may hang in test context because the transaction manager used in unit testing doesn't support nested transactions...
From the Javadoc of JpaTransactionManager:
* <p>This transaction manager supports nested transactions via JDBC 3.0 Savepoints.
* The {@link #setNestedTransactionAllowed "nestedTransactionAllowed"} flag defaults
* to {@code false} though, since nested transactions will just apply to the JDBC
* Connection, not to the JPA EntityManager and its cached entity objects and related
* context. You can manually set the flag to {@code true} if you want to use nested
* transactions for JDBC access code which participates in JPA transactions (provided
* that your JDBC driver supports Savepoints). <i>Note that JPA itself does not support
* nested transactions! Hence, do not expect JPA access code to semantically
* participate in a nested transaction.</i>
So clearly, if you don't make the following call (in Java config) or set the equivalent flag in your XML config:
txManager.setNestedTransactionAllowed(true);
or if your driver doesn't support Savepoints, it's "normal" to get problems with REQUIRES_NEW...
(Some may prefer an exception "nested transactions not supported")
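For reference, a minimal sketch of setting that flag in Java config (the configuration class and bean wiring are placeholders, not taken from the question):

import javax.persistence.EntityManagerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class TxConfig {

    @Bean
    public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
        JpaTransactionManager txManager = new JpaTransactionManager(emf);
        // Enables JDBC savepoint-based nesting; per the Javadoc quoted above,
        // this only affects the JDBC Connection, not the JPA EntityManager.
        txManager.setNestedTransactionAllowed(true);
        return txManager;
    }
}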
This kind of problem can show up when the underlying database holds locks from uncommitted changes.
What I would suspect is that some other code made inserts/deletes on the userData table(s) outside a transaction, or in a transaction which takes a very long time to execute because it's a batch job or similar. You should analyze all the code referring to these tables and look for missing @Transactional annotations.
Besides this answer, you may also check the isolation level of your transaction; perhaps it's too restrictive.
Does the update() method hang forever, or does it throw an exception when the timeout elapses?
Unfortunately, I have the same problem with Propagation.REQUIRES_NEW. Removing it resolves the problem. The debugger shows me that the commit method is hanging (invoked from the @Transactional aspect implementation).
The problem appears only in the test Spring context; when the application is deployed to the application server it works fine.
