I am developing an application that uses JPA and stateless EJBs. Basically, the application includes EJBs that are responsible for implementing business cases and EJBs that are responsible for fetching and removing data from the underlying database.
Example:
public interface UserContextAccessEJBLocal {

    /**
     * Removes the passed instance of {@link UserContext}.
     *
     * @param userContext an instance of {@link UserContext}
     * @throws NullPointerException if userContext is null.
     * @throws IOException if an I/O related error occurs.
     */
    void remove(UserContext userContext)
        throws IOException;
}
My question: if a JPA entity is fetched in EJB A and passed to EJB B, can I assume that the passed instance belongs to the persistence context managed by the EntityManager that was injected into B, or do I have to extract its ID and refetch it?
Is there a difference between stateless and stateful EJBs with regard to my question?
What you should avoid is passing entities around between different transactions, but passing entities between EJBs is not a problem as long as the call happens as part of the same transaction as the call that loaded the entity.
To help you further we would need a concrete example of how and, especially, why you pass entities around between EJBs; maybe this is not even necessary in the first place.
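To make the "same transaction" point concrete, here is a minimal sketch with hypothetical bean names, assuming container-managed transactions with the default REQUIRED attribute: EJB A loads the entity and calls EJB B within the same transaction, so the instance B receives is still managed.

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical names; each class lives in its own file in practice.
@Stateless
public class BusinessCaseEJB {                      // "EJB A"

    @PersistenceContext
    private EntityManager em;

    @EJB
    private UserContextAccessEJB access;            // "EJB B"

    public void removeUserContext(long id) {
        UserContext ctx = em.find(UserContext.class, id); // loaded in tx T1
        access.remove(ctx); // still tx T1, so ctx is still managed in B
    }
}

@Stateless
public class UserContextAccessEJB {

    @PersistenceContext
    private EntityManager em;

    public void remove(UserContext userContext) {
        // Managed here, because both beans share the JTA transaction and
        // therefore the same transaction-scoped persistence context.
        em.remove(userContext);
    }
}

If remove() were annotated with REQUIRES_NEW instead, B would get its own transaction and persistence context, the passed instance would be detached from B's point of view, and you would indeed have to refetch it by ID (or merge it).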
I am using an EntityManager for the retrieval queries in my Spring Boot application. An example snippet is given below.
@Repository
public class EntityManagerUtil {

    @PersistenceContext
    private EntityManager entityManager;

    public List<Employee> getAll() {
        // Use entityManager here
    }
}
I need some clarity on the questions below.
a. Will an EntityManager be created for every call, or will it be a singleton? (I tried printing entityManager; it returns the same object for all calls.)
b. Does this approach lead to any connection leaks?
c. Do I need to put the @Transactional annotation on read operations?
I am using Spring Boot version 2.0.3.RELEASE.
a) See M Deinum's comments about the injected EntityManager being a proxy. The normal situation for JPA is to have an EntityManager tied to each thread; Spring sets this up for you and provides a proxy that delegates to the actual EntityManager. So this arrangement is safe.
b) Connection leaks happen when you don't return a database connection to the pool. Spring takes care of providing a connection and cleaning up resources, so in the normal course of things you shouldn't get connection leaks. If you ignore the facilities Spring gives you, like JdbcTemplate, and insist on writing your own JDBC code, then you can cause resources to leak. If your code tries to use an EntityManager intended for another thread you will get an exception; it will not fail silently, and it will not (in itself) cause a connection leak.
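For completeness, a short sketch of the JdbcTemplate route mentioned above; JdbcTemplate borrows a connection, runs the query, maps the rows and returns the connection to the pool even if an exception is thrown (table and column names here are made up):

import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class EmployeeJdbcDao {

    private final JdbcTemplate jdbcTemplate;

    public EmployeeJdbcDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // No manual Connection/Statement/ResultSet handling, so nothing to leak.
    public List<String> findAllNames() {
        return jdbcTemplate.queryForList("SELECT name FROM employee", String.class);
    }
}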
c) You are using an RDBMS, so you will have a transaction.
Apparently some people think transactions are optional, but they really aren't. If you don't specify them, Spring will wrap each operation in its own transaction. Not only are you better off in general being explicit about this, but you can mark the transaction as read-only, which lets Spring perform some optimizations; see Oliver Drotbohm's answer for details. So maybe you can get by without it in some cases (such as when each HTTP request results in only one database operation), but it is in your interest to use it.
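As a rough illustration of that last point, a read-only transaction on a query method might look like this (a sketch, assuming a hypothetical Employee entity):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class EmployeeReadRepository {

    @PersistenceContext
    private EntityManager entityManager;

    // readOnly = true flags the transaction so Spring/Hibernate can skip
    // dirty checking and flushes; it is still a real transaction.
    @Transactional(readOnly = true)
    public List<Employee> getAll() {
        return entityManager
                .createQuery("SELECT e FROM Employee e", Employee.class)
                .getResultList();
    }
}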
I am using Hibernate 4 but not Spring. In the application I am developing I want to log a record of every add, update and delete to a separate log table. As it stands at the moment my code does two transactions in sequence, and it works, but I really want to wrap them up into one transaction.
I know Hibernate does not support nested transactions, only in conjunction with the Spring framework. I've read about savepoints, but they're not quite the same thing.
Nothing in the JPA or JTA specifications has support for nested transactions.
What you most likely mean by "supported with Spring" is @Transactional annotations on multiple methods in a call hierarchy. What Spring does in that situation is check whether there is an ongoing transaction and, if not, start a new one.
You might think that the following situation is a nested transaction.
@Transactional
public void method1() {
    method2(); // method in another class
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void method2() {
    // do something
}
What happens in reality is, simplified, the following (the type of transactionManager1 and transactionManager2 is javax.transaction.TransactionManager):
// call of method1 intercepted by spring
transactionManager1.begin();
// invocation of method1
// call of method 2 intercepted by spring (requires new detected)
transactionManager1.suspend();
transactionManager2.begin();
// invocation of method2
// method2 finished
transactionManager2.commit();
transactionManager1.resume();
// method1 finished
transactionManager1.commit();
In words: the one transaction is basically on pause. It is important to understand this, since the transaction of transactionManager2 might not see changes made by transactionManager1, depending on the transaction isolation level.
Maybe a little background on why I know this: I've written a prototype of a distributed transaction management system that allows methods to be executed transparently in a cloud environment (one method gets executed on one instance, the next might be executed somewhere else).
Some system info: it's using Spring, Spring Data, JPA and Hibernate.
My question is, what's the difference between the two? See below:
// this uses plain JPA and Spring's @Transactional
@Transactional(isolation = Isolation.READ_UNCOMMITTED)
public List getData() {
    // code
}

// this uses Spring Data
@Lock(LockModeType.NONE)
public List getData() {
    // code
}
Locking and transactions are two different things, but anyway...
This isolation level allows dirty reads: one transaction may see uncommitted changes made by some other transaction.
/**
 * A constant indicating that dirty reads, non-repeatable reads and phantom reads
 * can occur. This level allows a row changed by one transaction to be read by
 * another transaction before any changes in that row have been committed
 * (a "dirty read"). If any of the changes are rolled back, the second
 * transaction will have retrieved an invalid row.
 * @see java.sql.Connection#TRANSACTION_READ_UNCOMMITTED
 */
READ_UNCOMMITTED
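To illustrate what the quoted Javadoc means in practice, here is a rough plain-JDBC sketch of a dirty read. The URL, credentials and table are hypothetical, and the effect also requires an engine that really implements READ UNCOMMITTED (MySQL/InnoDB does; PostgreSQL, for example, silently upgrades it):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DirtyReadDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:mysql://localhost:3306/test"; // hypothetical

        try (Connection writer = DriverManager.getConnection(url, "user", "pass");
             Connection reader = DriverManager.getConnection(url, "user", "pass")) {

            writer.setAutoCommit(false);
            reader.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);

            try (Statement s = writer.createStatement()) {
                // Change a row but do NOT commit.
                s.executeUpdate("UPDATE employee SET salary = salary * 2 WHERE id = 1");
            }

            try (Statement s = reader.createStatement();
                 ResultSet rs = s.executeQuery("SELECT salary FROM employee WHERE id = 1")) {
                if (rs.next()) {
                    // Under READ_UNCOMMITTED this can already be the doubled,
                    // uncommitted value: a dirty read.
                    System.out.println("salary = " + rs.getBigDecimal(1));
                }
            }

            // Roll back: the value the reader may have seen never "existed".
            writer.rollback();
        }
    }
}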
LockModeType
Lock modes can be specified by means of passing a LockModeType argument to one of the EntityManager methods that take locks (lock, find, or refresh) or to the Query.setLockMode() or TypedQuery.setLockMode() method.
Lock modes can be used to specify either optimistic or pessimistic locks.
So, as it says, LockModeType.NONE is the default for annotations (JPA uses annotations left and right), and I guess that when you use EntityManager.find(Class, Object) the default LockModeType is used as well.
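For comparison, here is a small sketch of passing an explicit LockModeType to find() instead of relying on the default (Employee is a hypothetical entity):

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class LockingExample {

    // The EntityManager is assumed to be injected or passed in from elsewhere.
    Employee loadForUpdate(EntityManager entityManager, long id) {
        // Without the third argument, find() uses LockModeType.NONE.
        // PESSIMISTIC_WRITE typically translates to SELECT ... FOR UPDATE.
        return entityManager.find(Employee.class, id, LockModeType.PESSIMISTIC_WRITE);
    }
}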
I suggest this link to see some examples of pessimistic and optimistic locks, and this link to see Transactional examples.
In our application we are using Spring and Hibernate.
In all the DAO classes we have a SessionFactory auto-wired, and each of the DAO methods calls the getCurrentSession() method.
The question I have is: why not inject a Session object, in prototype scope, instead of a SessionFactory object? This would save us the call to getCurrentSession().
I think the first approach is correct, but I am looking for concrete scenarios where the second approach would throw errors or perhaps perform badly.
When you define a bean with prototype scope, a new instance is created for each place it needs to be injected into. So each DAO will get a different instance of Session, but all invocations of methods on the DAO will end up using the same session. Since a Session is not thread-safe and should not be shared across multiple threads, this will be an issue.
For most situations the session should be transaction scoped, i.e., a new session is opened when the transaction starts and is then closed automatically once the transaction finishes. In a few cases it might have to be extended to request scope.
If you want to avoid using SessionFactory.getCurrentSession(), then you will need to define your own scope implementation to achieve that.
This is something that is already implemented for JPA using proxies. In the case of JPA, an EntityManager is injected instead of an EntityManagerFactory. Instead of @Autowired there is the @PersistenceContext annotation. A proxy is created and injected during initialization. When any method is invoked, the proxy gets hold of the actual EntityManager implementation (using something similar to SessionFactory.getCurrentSession()) and delegates to it.
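Conceptually, the proxy does something like the following. This is only a sketch of the idea; the real implementation is Spring's SharedEntityManagerCreator, which binds the EntityManager to the current transaction rather than simply to the thread, and all names here are made up:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;

public final class ThreadBoundEntityManager {

    private static final ThreadLocal<EntityManager> CURRENT = new ThreadLocal<>();

    // Returns a proxy that, on every call, looks up the EntityManager bound
    // to the current thread (creating one lazily) and delegates to it.
    public static EntityManager createProxy(EntityManagerFactory emf) {
        InvocationHandler handler = (proxy, method, args) -> {
            EntityManager target = CURRENT.get();
            if (target == null) {
                target = emf.createEntityManager();
                CURRENT.set(target);
            }
            return method.invoke(target, args);
        };
        return (EntityManager) Proxy.newProxyInstance(
                EntityManager.class.getClassLoader(),
                new Class<?>[] { EntityManager.class },
                handler);
    }
}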
A similar thing can be implemented for Hibernate as well, but the additional complexity is not worth it. It is much simpler to define a getSession() method in a BaseDAO which internally calls SessionFactory.getCurrentSession(). With this, the code using the session is identical to injecting the session, as the sketch below shows.
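A minimal sketch of such a BaseDAO (class and field names assumed):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;

public abstract class BaseDAO {

    @Autowired
    private SessionFactory sessionFactory;

    // Every DAO method asks for the session bound to the current
    // transaction/thread instead of holding on to a Session field.
    protected Session getSession() {
        return sessionFactory.getCurrentSession();
    }
}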
Injecting prototype sessions means that each one of your DAO objects will, by definition, get its own Session... On the other hand, SessionFactory gives you the power to open and share sessions at will.
In fact getCurrentSession() will not open a new Session on every call... Instead, it will reuse sessions bound to the current session context (e.g., thread, JTA transaction, or externally managed context).
So let's think about it: assume that in your business layer there is an operation that needs to read and update several database tables (which means interacting, directly or indirectly, with several DAOs)... Pretty common scenario, right? Customarily, when this kind of operation fails you will want to roll back everything that happened in the current operation, right? So, for this particular case, which strategy seems appropriate?
1. Spanning several sessions, each one managing its own kind of objects and bound to a different transaction.
2. Having a single session manage the objects related to this operation, and demarcating the transactions according to your business needs.
In brief, sharing sessions and demarcating transactions effectively will not only improve your application's performance; it is part of your application's functionality. A sketch of the single-session approach follows.
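A rough sketch of option 2, with hypothetical DAO and entity types (UserDAO, AuditDAO, User) whose methods internally call sessionFactory.getCurrentSession():

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class BusinessOperation {

    private final SessionFactory sessionFactory;
    private final UserDAO userDao;   // hypothetical DAOs sharing the
    private final AuditDAO auditDao; // current session internally

    public BusinessOperation(SessionFactory sessionFactory,
                             UserDAO userDao, AuditDAO auditDao) {
        this.sessionFactory = sessionFactory;
        this.userDao = userDao;
        this.auditDao = auditDao;
    }

    public void updateAndLog(User user) {
        Session session = sessionFactory.getCurrentSession();
        Transaction tx = session.beginTransaction();
        try {
            userDao.update(user); // same current session
            auditDao.log(user);   // same current session
            tx.commit();          // one unit of work: all or nothing
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        }
    }
}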
I would deeply recommend you read Chapter 2 and Chapter 13 of the Hibernate Core Reference Manual to better understand the roles that SessionFactory, Session and Transaction play within the framework. It will also teach you about units of work as well as popular session patterns and anti-patterns.
Just a little background: I'm a new developer who has recently taken over a major project after the senior developer left the company, before I could develop a full understanding of how he structured it. I'll try to explain my issue the best I can.
This application creates several MessageListener threads to read objects from JMS queues. Once an object is received, the data is manipulated based on some business logic and then mapped to a persistence object to be saved to an Oracle database using a Hibernate EntityManager.
Up until a few weeks ago there hadn't been any major issues with this configuration in the year or so since I joined the project. But for one of the queues (the issue is isolated to this particular queue), the Spring-managed bean that processes the received object hangs at the method below. My debugging has led me to conclude that it completes everything within the method but hangs upon completion. After weeks of trying to resolve this I'm at the end of my rope. Any help with this would be greatly appreciated.
Since each MessageListener gets its own processor, this hanging method only affects the incoming data on one queue.
@Transactional(propagation = Propagation.REQUIRES_NEW, timeout = 180)
public void update(UserRelatedData userData, User user, Company company, ...)
{
    ...
    // business logic performed on user object
    ...
    entityMgr.persist(user);
    // business logic performed on userData object
    ...
    entityMgr.persist(userData);
    ...
    entityMgr.flush();
}
I inserted debug statements just to walk through the method, and it completes everything including entityMgr.flush().
REQUIRES_NEW may hang in a test context because the transaction manager used in unit testing doesn't support nested transactions...
From the Javadoc of JpaTransactionManager:
 * <p>This transaction manager supports nested transactions via JDBC 3.0 Savepoints.
 * The {@link #setNestedTransactionAllowed "nestedTransactionAllowed"} flag defaults
 * to {@code false} though, since nested transactions will just apply to the JDBC
 * Connection, not to the JPA EntityManager and its cached entity objects and related
 * context. You can manually set the flag to {@code true} if you want to use nested
 * transactions for JDBC access code which participates in JPA transactions (provided
 * that your JDBC driver supports Savepoints). <i>Note that JPA itself does not support
 * nested transactions! Hence, do not expect JPA access code to semantically
 * participate in a nested transaction.</i>
So clearly, if you don't call the following (Java config) or set the equivalent flag in your XML config:
txManager.setNestedTransactionAllowed(true);
or if your driver doesn't support savepoints, it's "normal" to get problems with REQUIRES_NEW...
(Some may prefer an explicit "nested transactions not supported" exception.)
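For reference, the Java-config variant might look like this (a sketch; the configuration class and bean method names are arbitrary):

import javax.persistence.EntityManagerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;

@Configuration
public class TxConfig {

    // Only useful for JDBC access code participating in JPA transactions,
    // and only if the JDBC driver supports savepoints (see the Javadoc above).
    @Bean
    public JpaTransactionManager transactionManager(EntityManagerFactory emf) {
        JpaTransactionManager txManager = new JpaTransactionManager(emf);
        txManager.setNestedTransactionAllowed(true);
        return txManager;
    }
}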
This kind of problem can show up when the underlying database holds locks from uncommitted changes.
What I would suspect is that some other code made inserts/deletes on the userData table(s) outside a transaction, or in a transaction that takes a very long time to execute because it's a batch job or similar. You should analyze all the code referring to these tables and look for missing @Transactional annotations.
Besides this answer, you may also check the isolation level of your transaction; perhaps it's too restrictive.
Does the update() method hang forever, or does it throw an exception when the timeout elapses?
Unfortunately I have the same problem with Propagation.REQUIRES_NEW. Removing it resolves the problem. The debugger shows me that the commit method is hanging (invoked from the @Transactional aspect implementation).
The problem appears only in the test Spring context; when the application is deployed to the application server it works fine.