I've got a question about working with entities that were retrieved from the database.
Currently I have a lot of operations where I need to get entities from the database and pass them to another service. A simplified version of such code looks like this:
List<Entity> list;
using (var session = SessionFactory.OpenSession())
{
    list = session.QueryOver<Entity>().Future().ToList();
}
So I don't know: if the list of objects isn't disposed for a long time, will it cause a memory leak because of sessions being kept alive? Do NHibernate sessions stay alive as long as objects that were loaded in them still exist?
Update:
I found the session setting Session.ActiveEntityMode (POCO). Does it solve my problem?
The session is disposed as soon as the using block ends. All loaded entities are still valid, except for lazy-loaded collections/references/properties that were not initialized before the session was closed.
Also, the Future() in session.QueryOver<Entity>().Future().ToList(); is a no-op when there are no other pending operations before it that also use Future()/FutureValue().
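If the detached entities will be used long after the session is gone and you need one of their lazy collections, one option is to initialize it eagerly while the session is still open. A minimal sketch, assuming a hypothetical lazy Entity.Children collection and the classic QueryOver fetch syntax (pre-NHibernate 5 style):
List<Entity> list;
using (var session = SessionFactory.OpenSession())
{
    list = session.QueryOver<Entity>()
                  .Fetch(e => e.Children).Eager                                          // initialize the lazy collection now
                  .TransformUsing(NHibernate.Transform.Transformers.DistinctRootEntity)  // drop duplicate roots produced by the join
                  .List<Entity>()
                  .ToList();
}
// The session is disposed here, but the detached entities and their
// already-initialized Children remain safe to use.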
I have a problem with Hibernate Envers (version 5.2.0.Final).
Context:
I'm auditing some entities with some lazy relations. I have a JSF page that loads one version of one entity with all relations of that version. That works fine. So now I have a page that shows a revision of the entity with all relations of that revision. On this page I can open a fieldset, which triggers an AJAX request. In that request we reattach all relations by calling entityManager.merge(entity) so that the lazy relations in this fieldset can be fetched. (The EntityManager is request-scoped.)
The Problem:
The AJAX call is a new request. The server calls entityManager.merge(entity), which forces the creation of a new EntityManager (so a new org.hibernate.internal.SessionImpl is created). On this object Hibernate calls SessionImpl.merge(...). But in the method org.hibernate.internal.AbstractSharedSessionContract.createQuery(String) another SessionImpl object is used, one that was already closed in the previous request. That causes a java.lang.IllegalStateException: Session/EntityManager is closed.
In one sentence: although a new EntityManager was created and merge was called on that new EntityManager, Hibernate uses an old Session/EntityManager from the previous request.
I debugged the problem and found the following:
Debug1: shows the stack trace of SessionImpl.merge(...) with the session's object id.
Debug2: shows the last method with the correct SessionImpl object (see its id). This object is not used in the following methods.
Debug3: the step after Debug2 does not know the given SessionImpl object. It has its own SessionImpl object in collection.initializor.versionsReader. This session was created and closed in the previous request (when loading the page).
Debug4: now Hibernate wants to create the query with the closed SessionImpl.
Debug5: this causes the exception, as the session is closed.
My questions:
Is this a bug of Hibernate?
Why is the given SessionImpl in the method org.hibernate.type.CollectionType.getElementIterator(...) not used?
Does anyone know a solution or workaround for this problem?
Thank you very much for any ideas. I have spent days on this bug.
Why is the Session arg in o.h.type.CollectionType.getElementIterator not used?
The short answer is that it isn't required; it's simply a backward-compatibility concern from 8 years ago.
The long answer is that the type system used to vary its behavior based on whether or not the user had specified the session to operate in EntityMode.MAP or EntityMode.POJO, and therefore the types needed to know what mode the session was in; hence why it was passed.
But even back in 2011, when this was changed, the session argument only influenced behavior if the session was operating in EntityMode.MAP. In other words, all other modes always routed directly to the underlying collection's Collection#iterator() method.
All this aside, however, this doesn't have any impact on what you experience in your Debug3 screenshot.
Is this a bug in Hibernate?
No, based on what I have read, I believe you're mixing concerns.
In Hibernate (without Envers), you can basically do this:
// Request 1
request1EntityManager = getEntityManager();
sessionScopeEntity = request1EntityManager.find( MyEntity.class, myEntityId );

// Request 2
request2EntityManager = getEntityManager();
sessionScopeEntity = request2EntityManager.merge( sessionScopeEntity );
for ( SomeCollectionItem item : sessionScopeEntity.getSomeCollection() ) {
    // do things here
}
The above works because you reassociate the entity with the new session, which in turn injects the session into all the uninitialized proxies the entity maintains. But you can also rewrite the above as:
// Request 1
request1EntityManager = getEntityManager();
sessionScopeEntity = request1EntityManager.find( MyEntity.class, myEntityId );
sessionScopeEntity.getSomeCollection().size(); // initialize the collection with request1EntityManager's session

// Request 2
request2EntityManager = getEntityManager();
for ( SomeCollectionItem item : sessionScopeEntity.getSomeCollection() ) {
    // do things here
}
The difference is the collection gets initialized with the first session and therefore when you attempt to access it with the second session, the entity doesn't necessarily need a merge because the collection is no longer a proxy but actually populated like a normal fetched collection would be.
The major difference between an entity instance returned by Hibernate and an audited entity instance returned by Envers is that the audited entity instance is NOT a managed persistent entity.
Depending on your scenario, you may decide to audit only a subset of the fields on an entity mapping. This is why you cannot, and should not, use operations like merge with that instance, as it could easily lead to unintended side effects on your real data.
If you intend to pass the audited entity instance across sessions, I would highly suggest that you instead initialize the collections you need up front in the first session, where you fetched the instance, because presently there is no way to re-associate an audited entity instance with a new session.
We are using Realm in a Xamarin app and have some issues refreshing the local database based on a remote source. Data is fetched from a remote endpoint and stored locally using Realm for easier/faster access.
Program flow is as follows:
1. Fetch data from the remote source (if possible).
2. Loop through the entities returned by the remote source while keeping track of the IDs we've seen so far. New or updated entities are written to Realm.
3. Loop through the set of locally stored entities, removing entities we haven't seen in step 2 with Realm.Remove(entity); (in a transaction).
4. Return Realm.All<Entity>();
Unfortunately, the entities are returned in step 4 before all remove operations have been written. As a result, it takes a couple of refreshes before the local database is completely in sync.
The remove operation is done as follows:
foreach (Entity entity in realm.All<Entity>())
{
    if (seenIds.Contains(entity.Id))
    {
        continue;
    }

    realm.Write(() => {
        realm.Remove(entity);
    });
}
Is there a way to have Realm wait until the transaction is completed before returning Realm.All<Entity>()?
I am pretty sure this is not particularly a Realm issue - the same pattern would cause problems with a lot of enumerable, mutable containers. You are removing items from a list whilst iterating it, so the enumeration moves on too far.
There is no buffering on Realm transactions, so I guarantee this is not about having Realm wait until the transaction is completed; the problem is your list logic.
There are two basic ways to do this differently:
1. Use ToList to get a list of all objects from the All query - this is expensive if there are many objects, because you will instantiate all of them.
2. Instead of removing objects inside the loop, add them to a list of items to be removed, then iterate that list.
Note that using a transaction per remove, as you are doing with Write here, is relatively slow. You can do many operations in one transaction.
We are also working on other improvements to the Realm API that might give a more efficient way of handling this. It would be very helpful to know the relative data sizes - the number of removals vs. records in the loop. We love getting sample data and schemas (you can send them privately to help@realm.io).
An example of option 2:
var toDelete = new List<Entity>();
foreach (Entity entity in realm.All<Entity>())
{
    if (!seenIds.Contains(entity.Id))
        toDelete.Add(entity);
}

realm.Write(() => {
    foreach (Entity entity in toDelete)
        realm.Remove(entity);
});
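For completeness, a hedged sketch of the whole refresh (steps 2-4) in a single write transaction; the remoteEntities name, the long Id type and the Add(entity, update: true) upsert are assumptions and may differ per Realm SDK version:
var seenIds = new HashSet<long>();           // assumes Entity.Id is a long

realm.Write(() =>
{
    // Step 2: upsert everything the remote source returned.
    foreach (Entity remote in remoteEntities)
    {
        realm.Add(remote, update: true);     // upsert by primary key (call may vary by SDK version)
        seenIds.Add(remote.Id);
    }

    // Step 3: collect the stale objects first (this instantiates them), then remove
    // them, so we never mutate the collection we are enumerating.
    var stale = realm.All<Entity>().ToList().Where(e => !seenIds.Contains(e.Id)).ToList();
    foreach (Entity entity in stale)
    {
        realm.Remove(entity);
    }
});

// Step 4: Write has committed by the time it returns, so this query
// already reflects the removals.
return realm.All<Entity>();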
My problem is straightforward. I want to access some data from the database when the application loads on Tomcat. To do something at that point in time I use @PostConstruct (which does its job properly).
However, in that method I make two separate connections to the DB: one for fetching a list of entities and another for adding them into a common library. The second step implies some behind-the-scenes queries for resolving some lazy-loading associations. Here is the code snippet:
@Override
@PostConstruct
public void populateLibrary() {
    // query for the Book Descriptors - 1st query works!!!
    List<BookDescriptor> bookDescriptors = bookDescriptorService.list();

    Session session = sessionFactory.openSession();
    Transaction transaction = null;
    try {
        transaction = session.beginTransaction();
        // resolving some lazy-loading associations - 2nd query fails!!!
        for (BookDescriptor book : bookDescriptors) {
            library.addEntry(book);
        }
        transaction.commit();
    } catch (HibernateException e) {
        transaction.rollback();
        e.printStackTrace();
    } finally {
        session.close();
    }
}
The 1st query works while the 2nd fails, as I wrote in the comments. The failure gives:
org.hibernate.LazyInitializationException: could not initialize proxy - no Session
at org.hibernate.proxy.AbstractLazyInitializer.initialize(AbstractLazyInitializer.java:86)
at org.hibernate.proxy.AbstractLazyInitializer.getImplementation(AbstractLazyInitializer.java:140)
at org.hibernate.proxy.pojo.javassist.JavassistLazyInitializer.invoke(JavassistLazyInitializer.java:190)
at com.freightgate.domain.SecurityFiling_$$_javassist_7.getSfSubmissionType(SecurityFiling_$$_javassist_7.java)
at com.freightgate.dao.SecurityFilingTest.test(SecurityFilingTest.java:73)
Which is very odd, since I explicitly opened and closed a transaction. However, if I inspect some details of how the 1st query works, it seems like behind the scenes the session is bound to the AbstractLazyInitializer class.
I resolved my problem by abstracting the functionality in the for loop away into a separate service class annotated with @Transactional(readOnly = true). Still, I'm puzzled as to why the approach I posted here fails.
If anyone has some hints, I'd be very happy to hear them.
You load entities in a first session, then close this session, then open a new session and try to lazy-load collections of the entities. That can't work.
For lazy loading to work, the entity must be attached to an open session. Just opening another session doesn't attach entities you loaded before to this new session. In the meantime, some other transaction could have radically changed the database; the entity might not even exist anymore...
The best solution is what you have done: encapsulate everything into a single transactional service. You could also have opened the transaction before calling the first service, but why handle transactions programmatically, since Spring does it for you declaratively?
I'm using the TransactionScope class within a project based on Silverlight and RIA services. Each time I need to save some data, I create a TransactionScope object, save my data using Oracle ODP, then call the Complete method on my TransactionScope object and dispose the object itself:
public override bool Submit(ChangeSet changeSet)
{
    TransactionOptions txopt = new TransactionOptions();
    txopt.IsolationLevel = IsolationLevel.ReadCommitted;

    bool result;
    using (TransactionScope tx = new TransactionScope(TransactionScopeOption.Required, txopt))
    {
        // Here I open an Oracle connection and fetch some data
        GetSomeData();

        // This is where I persist my data
        result = base.Submit(changeSet);

        tx.Complete();
    }
    return result;
}
My problem is: the first time the Submit method is called, everything is fine, but if I call it a second time, the execution gets stuck for a couple of minutes after the call to Complete (so, when disposing of tx), and then I get the Oracle error ORA-12154. Of course, I already checked that my persistence code completes without errors. Any ideas?
Edit: today I repeated the test and for some reason I'm getting a different error instead of the Oracle exception:
System.InvalidOperationException: Operation is not valid due to the current state of the object.
at System.Transactions.TransactionState.ChangeStatePromotedAborted(InternalTransaction tx)
at System.Transactions.InternalTransaction.DistributedTransactionOutcome(InternalTransaction tx, TransactionStatus status)
at System.Transactions.Oletx.RealOletxTransaction.FireOutcome(TransactionStatus statusArg)
at System.Transactions.Oletx.OutcomeEnlistment.InvokeOutcomeFunction(TransactionStatus status)
at System.Transactions.Oletx.OletxTransactionManager.ShimNotificationCallback(Object state, Boolean timeout)
at System.Threading._ThreadPoolWaitOrTimerCallback.PerformWaitOrTimerCallback(Object state, Boolean timedOut)
I somehow managed to solve this problem, although I still can't figure out why it showed up in the first place: I just moved the call to GetSomeData outside the scope of the distributed transaction. Since the call to Submit may open many connections and perform any kind of operation on the DB, I can't tell why GetSomeData was causing this problem (it just opens a connection, calls a very simple stored function and returns a boolean). I can only guess it has something to do with the implementation of the Submit method and/or with the instantiation of multiple Oracle connections within the same transaction scope.
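For reference, a hedged sketch of that workaround, keeping the member names from the snippet above and simply moving GetSomeData out of the transaction scope:
public override bool Submit(ChangeSet changeSet)
{
    // Read-only lookup done before the distributed transaction exists,
    // so its Oracle connection is never enlisted or promoted.
    GetSomeData();

    TransactionOptions txopt = new TransactionOptions();
    txopt.IsolationLevel = IsolationLevel.ReadCommitted;

    bool result;
    using (TransactionScope tx = new TransactionScope(TransactionScopeOption.Required, txopt))
    {
        // Only the actual persistence work runs inside the transaction scope.
        result = base.Submit(changeSet);
        tx.Complete();
    }
    return result;
}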
The following code snippet works fine with SQL Server 2008 (SP1), but with Oracle 11g the call to session.BeginTransaction() throws an exception with the message "Connection is already part of a local or a distributed transaction" (stack trace shown below). We are using the "NHibernate.Driver.OracleDataClientDriver".
Has anyone else run into this?
using (var scope = new TransactionScope())
{
    using (var session = sessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        // do what you need to do with the session
        transaction.Commit();
    }
    scope.Complete();
}
Exception at: at NHibernate.Transaction.AdoTransaction.Begin(IsolationLevel isolationLevel)
at NHibernate.Transaction.AdoTransaction.Begin()
at NHibernate.AdoNet.ConnectionManager.BeginTransaction()
at NHibernate.Impl.SessionImpl.BeginTransaction()
at MetraTech.BusinessEntity.DataAccess.Persistence.StandardRepository.SaveInstances(List`1& dataObjects) in S:\MetraTech\BusinessEntity\DataAccess\Persistence\StandardRepository.cs:line 3103
Inner error message was: Connection is already part of a local or a distributed transaction
Inner exception at: at Oracle.DataAccess.Client.OracleConnection.BeginTransaction(IsolationLevel isolationLevel)
at Oracle.DataAccess.Client.OracleConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at System.Data.Common.DbConnection.System.Data.IDbConnection.BeginTransaction()
at NHibernate.Transaction.AdoTransaction.Begin(IsolationLevel isolationLevel)
The problem with using only the transaction scope is outlined here:
NHibernate FlushMode Auto Not Flushing Before Find
It appears NHibernate (v3.1 with the Oracle dialect and an 11g database, using ODP.NET v2.112.1.2) requires its own transactions to avoid the flushing issue, but I haven't been able to get the transaction scope to work with the NHibernate transactions.
I can't seem to get it to work :(
This might be a defect in NHibernate or ODP.NET, I'm not sure...
I found the same problem described here:
NHibernate 3.0: TransactionScope and Auto-Flushing
FIXED: I found a solution! By putting "enlist=dynamic;" into my Oracle connection string, the problem was resolved. I have been able to use both the NHibernate transaction (to fix the flush issue) and the transaction scope, like so:
ISessionFactory sessionFactory = CreateSessionFactory();

using (TransactionScope ts = new TransactionScope())
{
    using (ISession session = sessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        // do stuff here
        tx.Commit();
    }
    ts.Complete();
}
I checked my log files and found this:
2011-06-27 14:03:59,852 [10] DEBUG NHibernate.Impl.AbstractSessionImpl - enlisted into DTC transaction: Serializable
before any SQL was executed on the connection. I will unit test to confirm proper execution. I'm not too sure what Serializable is telling me here, though.
Brad's answer, using an outer TransactionScope and an inner NHibernate transaction with enlist=dynamic, doesn't seem to work properly. OK, the data gets committed.
But if you omit the scope.Complete() or raise an exception after tx.Commit(), the data still gets committed (for Oracle)! For some reason, however, this works as expected for SQL Server.
NHibernate transactions take care of auto-flush, but in the end they call the underlying ADO.NET transaction. While many sources encourage the above pattern as a best practice for NHibernate to solve the auto-flush issue, sources discussing native ADO.NET say the contrary: do NOT use TransactionScope and inner transactions together, neither for Oracle nor for SQL Server. (See this question and my answer.)
My conclusion: do not combine TransactionScope and NHibernate transactions. To use TransactionScope, skip the NHibernate transaction and handle the flushing manually (see also the NHibernate Flush documentation).
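A minimal sketch of that TransactionScope-only variant with a manual flush (assuming the session factory and the enlist=dynamic connection string from above):
using (var scope = new TransactionScope())
{
    using (var session = sessionFactory.OpenSession())
    {
        // do the work with the session here

        session.Flush();   // no NHibernate transaction, so flush explicitly before the session goes away
    }
    scope.Complete();      // complete the ambient transaction after the session is disposed
}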
One question: why are you doing the inner session.BeginTransaction? Since 2.1 GA, NHibernate automatically enrolls in TransactionScope contexts, so there's no reason to manage your own transaction anymore.
From NHibernate cookbook
Remember that NHibernate requires an NHibernate transaction when interacting with the database. TransactionScope is not a substitute. As illustrated in the next image, the TransactionScope should completely surround both the session and NHibernate transaction. The call to TransactionScope.Complete() should occur after the session has been disposed. Any other order will most likely lead to nasty, production crashing bugs like connection leaks.
My opinion is also that it should work with TransactionScope alone, but it does not, neither in version 3.3.x.x nor in 4.0.0.400.
The recipe above may work, but it still needs testing with nested TransactionScopes, with an inner TransactionScope that uses TransactionScopeOption.Suppress (when using SQL), and so on.
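For the nested-scope case mentioned above, the shape to test would look roughly like this (a hedged sketch; TransactionScopeOption.Suppress simply runs the inner block outside the ambient transaction):
using (var outer = new TransactionScope())
{
    // work enlisted in the ambient transaction goes here

    using (var inner = new TransactionScope(TransactionScopeOption.Suppress))
    {
        // work in here runs without an ambient transaction,
        // e.g. writes that must survive a rollback of the outer scope
        inner.Complete();
    }

    outer.Complete();
}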