I am using Hibernate and Spring for my web application.
In some places I forgot to commit the transaction, as in the code below:
SessionFactory sf = HibernateUtils.getSessionFactory();
session = sf.openSession();
tx = session.beginTransaction();
// .......................... some code ..........................
// ... but I forgot to commit the transaction ...
finally
{
    session.flush();
    session.close();
}
Now my question is:
Does this create any problem for me?
Any issue regarding memory leaks?
Increased load on the database?
What is the effect of this on my system?
If you don't commit the transaction, then
The tables involved in the transaction will be locked until the connection gets dropped/closed.
The changes made to the tables as part of the transaction will be visible only to code reusing the same connection from the pool, not to others.
If commit() is never called on such a connection before it is closed/dropped, then all the changes made will be lost when the connection is closed/dropped.
Basically, the behavior of your system will be arbitrary, which means it's a problem for you. It doesn't cause a memory leak, though.
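For reference, a minimal sketch of what the missing commit/rollback handling could look like, reusing the HibernateUtils helper from the question (the surrounding method name and the catch/rethrow choice are illustrative, not the asker's actual code):

import org.hibernate.Session;
import org.hibernate.Transaction;

public void doSomeWork() {
    Session session = HibernateUtils.getSessionFactory().openSession();
    Transaction tx = null;
    try {
        tx = session.beginTransaction();
        // .......................... some code ..........................
        tx.commit();                    // flushes the session and makes the changes permanent
    } catch (RuntimeException e) {
        if (tx != null) tx.rollback(); // discard the partial work instead of leaving it pending
        throw e;
    } finally {
        session.close();               // always release the connection back to the pool
    }
}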
Related:
Does Java's Connection.close() roll back when called in a finally block?
I know .NET's SqlConnection.Close() does.
With this I could write try/finally blocks without a catch...
Example:
try {
    conn.setAutoCommit(false);
    ResultSet rs = executeQuery(conn, ...);
    ....
    executeNonQuery(conn, ...);
    ....
    conn.commit();
} finally {
    conn.close();
}
According to the javadoc, you should try to either commit or roll back before calling the close method. The results otherwise are implementation-defined.
In every database system I've worked with, there is no harm in issuing a rollback right after a commit. So if you commit in the try block and roll back in the finally block, successful work gets committed, while an exception or early return that skips the commit causes the rollback to undo the transaction. The safe thing to do is:
try {
    conn.setAutoCommit(false);
    ResultSet rs = executeQuery(conn, ...);
    ....
    executeNonQuery(conn, ...);
    ....
    conn.commit();
} finally {
    conn.rollback();
    conn.close();
}
Oracle's JDBC driver commits on close() by default. You should not rely on this behaviour if you intend to write multi-platform JDBC code.
The behavior is completely different between different databases. Examples:
Oracle
The transaction is committed when closing a connection with an open transaction (as Mr. Shiny and New 安宇 stated).
SQL Server
Calling the close method in the middle of a transaction causes the transaction to be rolled back.
close Method (SQLServerConnection)
For MySQL JDBC, the implementation rolls back the connection if closed without a call to commit or rollback methods.
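Given those driver differences, one defensive option is to never let close() decide the outcome: end the transaction explicitly before closing. A small sketch of such a helper (the class and method names are mine, purely illustrative):

import java.sql.Connection;
import java.sql.SQLException;

public final class JdbcCloser {

    // Roll back anything that was not explicitly committed, then close.
    // This way the result never depends on driver-specific close() semantics.
    public static void rollbackAndClose(Connection conn) {
        if (conn == null) return;
        try {
            if (!conn.getAutoCommit()) {
                conn.rollback(); // harmless if everything was already committed
            }
        } catch (SQLException ignored) {
            // nothing sensible to do here; still close below
        } finally {
            try { conn.close(); } catch (SQLException ignored2) { /* ignore */ }
        }
    }
}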
It is useless to roll back in the finally block. After you commit, and the commit is successful, why roll back? So if I were you, I would roll back in the catch block instead.
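As a sketch, the catch-based variant of the earlier example looks like this (same hypothetical conn and helpers, inside a method that declares throws SQLException):

try {
    conn.setAutoCommit(false);
    // ... queries and updates as in the example above ...
    conn.commit();              // reached only when everything succeeded
} catch (SQLException e) {
    conn.rollback();            // roll back only on actual failure
    throw e;
} finally {
    conn.close();               // always close; no rollback needed here
}

The trade-off against the rollback-in-finally version above is that you must catch (and usually rethrow) the exception, whereas the finally version needs no catch block at all.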
I am using Spring Boot (1.4.1) with Hibernate (5.0.1.Final). I noticed that when I try to write to the DB from within a @TransactionalEventListener handler, the call is simply ignored. A read call works just fine.
When I say ignored, I mean there is no write to the DB and there are no logs. I even enabled log4jdbc and still saw no logs, which means no Hibernate session was created. From this I reckon that somewhere Spring Boot detects that it is a transactional event handler and ignores the write call.
Here is an example.
// This function is defined in a class marked with @Service
@TransactionalEventListener
open fun handleEnqueue(event: EnqueueEvent) {
    // some code to obtain encodeJobId
    this.uploadService.saveUploadEntity(uploadEntity, encodeJobId)
}

@Service
@Transactional
class UploadService {
    // ..... code
    open fun saveUploadEntity(uploadEntity: UploadEntity, encodeJobId: String): UploadEntity {
        // some code
        return this.save(uploadEntity)
    }
}
Now if I force a new transaction by annotating saveUploadEntity with
@Transactional(propagation = Propagation.REQUIRES_NEW)
a new transaction with its own connection is created and everything works fine.
I don't like that there is complete silence in the logs when this write is dropped (again, reads succeed). Is this a known bug?
How can I enable the handler to start a new transaction? If I put Propagation.REQUIRES_NEW on my handleEnqueue method itself, it does not work.
Besides enabling log4jdbc, which successfully logs reads/writes, I have the following settings in Spring.
Thanks
I ran into the same problem. This behavior is actually mentioned in the documentation of TransactionSynchronization#afterCompletion(int), which is referred to by TransactionPhase.AFTER_COMMIT (the default TransactionPhase attribute of @TransactionalEventListener):
The transaction will have been committed or rolled back already, but the transactional resources might still be active and accessible. As a consequence, any data access code triggered at this point will still "participate" in the original transaction, allowing to perform some cleanup (with no commit following anymore!), unless it explicitly declares that it needs to run in a separate transaction. Hence: Use PROPAGATION_REQUIRES_NEW for any transactional operation that is called from here.
Unfortunately, this seems to leave no option other than to enforce a new transaction via Propagation.REQUIRES_NEW. The problem is that the transactional event listeners are implemented as transaction synchronizations and are hence bound to the transaction. When the transaction is closed and its resources are cleaned up, so are the listeners. There might be a way to use a customized EntityManager which stores events and then publishes them after its close() was called.
Note that you can use TransactionPhase.BEFORE_COMMIT on your #TransactionalEventListener which will take place before the commit of the transaction. This will write your changes to the database but you won't know whether the transaction you're listening on was actually committed or is about to be rolled back.
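To make the two options concrete, here is a hedged Java sketch (EnqueueEvent comes from the question; the class name and method bodies are illustrative). Note that for the REQUIRES_NEW variant the annotated method must be invoked through a Spring proxy; in the question this worked when the annotation was placed on UploadService.saveUploadEntity:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Service
public class EnqueueEventHandlers {

    // Option 1: stay on the default AFTER_COMMIT phase, but make the write
    // run in its own, brand-new transaction so it is actually flushed.
    @TransactionalEventListener // default phase is TransactionPhase.AFTER_COMMIT
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void handleAfterCommit(EnqueueEvent event) {
        // database writes here run in a fresh transaction
    }

    // Option 2: move to BEFORE_COMMIT and piggyback on the original
    // transaction; the write then shares its fate (commit or rollback).
    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
    public void handleBeforeCommit(EnqueueEvent event) {
        // database writes here are committed with the publishing transaction
    }
}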
My problem is straightforward. I want to access some data from the database when the application loads on Tomcat. To do something at that point in time I use @PostConstruct (which does its job properly).
However, in that method I make 2 separate connections to the DB: one for bringing a list of entities and another for adding them into a common library. The second step implies some behind-the-scenes queries for resolving some lazy-loading associations. Here is the code snippet:
@Override
@PostConstruct
public void populateLibrary() {
    // query for the Book Descriptors - 1st query works!!!
    List<BookDescriptor> bookDescriptors = bookDescriptorService.list();

    Session session = sessionFactory.openSession();
    Transaction transaction = null;
    try {
        transaction = session.beginTransaction();
        // resolving some lazy-loading associations - 2nd query fails!!!
        for (BookDescriptor book : bookDescriptors) {
            library.addEntry(book);
        }
        transaction.commit();
    } catch (HibernateException e) {
        if (transaction != null) { // beginTransaction() itself may have thrown
            transaction.rollback();
        }
        e.printStackTrace();
    } finally {
        session.close();
    }
}
The 1st query works while the 2nd fails, as I wrote in the comments. The failure gives:
org.hibernate.LazyInitializationException: could not initialize proxy - no Session
at org.hibernate.proxy.AbstractLazyInitializer.initialize(AbstractLazyInitializer.java:86)
at org.hibernate.proxy.AbstractLazyInitializer.getImplementation(AbstractLazyInitializer.java:140)
at org.hibernate.proxy.pojo.javassist.JavassistLazyInitializer.invoke(JavassistLazyInitializer.java:190)
at com.freightgate.domain.SecurityFiling_$$_javassist_7.getSfSubmissionType(SecurityFiling_$$_javassist_7.java)
at com.freightgate.dao.SecurityFilingTest.test(SecurityFilingTest.java:73)
This is very odd, since I explicitly opened a session and began a transaction. However, if I inspect how the 1st query works, it seems that behind the scenes the session is bound in the AbstractLazyInitializer class.
I resolved my problem by moving the functionality of the for loop into a separate service class annotated with @Transactional(readOnly = true). Still, I'm puzzled as to why the approach I posted here fails.
If anyone has some hints, I'd be very happy to hear them.
You load entities in a first session, then close that session, then open a new session and try to lazy-load collections of those entities. That can't work.
For lazy loading to work, the entity must be attached to an open session. Just opening another session doesn't attach the entities you loaded before to the new session. In the meantime, some other transaction could have radically changed the database; the entity might not even exist anymore.
The best solution is what you have done: encapsulate everything in a single transactional service. You could also have opened the transaction before calling the first service, but why handle transactions programmatically when Spring does it for you declaratively?
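For illustration, a minimal sketch of that encapsulation (the class name and constructor wiring are assumptions, modeled on the question's bookDescriptorService and library):

import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class LibraryPopulator {

    private final BookDescriptorService bookDescriptorService;
    private final Library library;

    public LibraryPopulator(BookDescriptorService bookDescriptorService, Library library) {
        this.bookDescriptorService = bookDescriptorService;
        this.library = library;
    }

    // A single transaction, and therefore a single open session, covers both
    // the initial query and the lazy loading triggered by addEntry().
    @Transactional(readOnly = true)
    public void populate() {
        List<BookDescriptor> books = bookDescriptorService.list();
        for (BookDescriptor book : books) {
            library.addEntry(book); // proxies are initialized while the session is still open
        }
    }
}

The @PostConstruct method then just delegates to populate() on this bean, so the call goes through the Spring proxy and the transaction advice is actually applied.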
I am looking to retrofit our existing transaction API to use Spring’s PlatformTransactionManager, such that Spring will manage our transactions. I chained my DataSources as follows:
DataSourceTransactionManager -> LazyConnectionDataSourceProxy -> dbcp.PoolingDataSource -> OracleDataSource
In experimenting with the DataSourceTransactionManager, I have found that where PROPAGATION_REQUIRES_NEW is used, Spring's transaction management seems to require that transactions be committed/rolled back in LIFO fashion, i.e. you must commit/roll back the most recently created transaction first.
Example:
@Test
public void testSpringTxns() {
    // start a new txn
    TransactionStatus txnAStatus = dataSourceTxnManager.getTransaction(propagationRequiresNewDefinition); // specifies PROPAGATION_REQUIRES_NEW
    Connection connectionA = DataSourceUtils.getConnection(dataSourceTxnManager.getDataSource());
    // start another new txn
    TransactionStatus txnBStatus = dataSourceTxnManager.getTransaction(propagationRequiresNewDefinition);
    Connection connectionB = DataSourceUtils.getConnection(dataSourceTxnManager.getDataSource());
    assertNotSame(connectionA, connectionB);
    try {
        //... do stuff using connectionA
        //... do other stuff using connectionB
    } finally {
        dataSourceTxnManager.commit(txnAStatus);
        dataSourceTxnManager.commit(txnBStatus); // results in java.lang.IllegalStateException: Cannot deactivate transaction synchronization - not active
    }
}
Sadly, this doesn't fit at all well with our current transaction API, which allows you to create transactions, represented by Java objects, and commit them in any order.
My question:
Am I right in thinking that this LIFO behaviour is fundamental to Spring’s transaction management (even for completely separate transactions)? Or is there a way to tweak its behaviour such that the above test will pass?
I know the proper way would be to use annotations, AOP, etc. but at present our code is not Spring-managed, so it is not really an option for us.
Thanks!
Yes, I have run into the same problem when using Spring:
java.lang.IllegalStateException: Cannot deactivate transaction synchronization - not active.
As described above, Spring's transaction management requires that transactions be committed/rolled back in LIFO fashion (stack behaviour). Once I committed them in that order, the problem disappeared.
Thanks.
Yes, I found this same behavior in my own application. Only one transaction is "active" at a time, and when you commit/rollback the current transaction, the next active transaction is the next most recently started transaction (LIFO/stack behavior). I wasn't able to find any way to control this, it seems to be built into the Spring Framework.
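In other words, the test above should pass once the finally block releases the transactions in reverse order of creation, i.e.:

} finally {
    // commit/roll back the innermost (most recently started) transaction first
    dataSourceTxnManager.commit(txnBStatus);
    dataSourceTxnManager.commit(txnAStatus);
}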
The following code snippet works fine with SQL Server 2008 (SP1), but with Oracle 11g the call to session.BeginTransaction() throws an exception with the message 'Connection is already part of a local or a distributed transaction' (stack trace shown below). We are using the "NHibernate.Driver.OracleDataClientDriver".
Has anyone else run into this?
using (var scope = new TransactionScope())
{
    using (var session = sessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        // do what you need to do with the session
        transaction.Commit();
    }
    scope.Complete();
}
Exception at: at NHibernate.Transaction.AdoTransaction.Begin(IsolationLevel isolationLevel)
at NHibernate.Transaction.AdoTransaction.Begin()
at NHibernate.AdoNet.ConnectionManager.BeginTransaction()
at NHibernate.Impl.SessionImpl.BeginTransaction()
at MetraTech.BusinessEntity.DataAccess.Persistence.StandardRepository.SaveInstances(List`1& dataObjects) in S:\MetraTech\BusinessEntity\DataAccess\Persistence\StandardRepository.cs:line 3103
Inner error message was: Connection is already part of a local or a distributed transaction
Inner exception at: at Oracle.DataAccess.Client.OracleConnection.BeginTransaction(IsolationLevel isolationLevel)
at Oracle.DataAccess.Client.OracleConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at System.Data.Common.DbConnection.System.Data.IDbConnection.BeginTransaction()
at NHibernate.Transaction.AdoTransaction.Begin(IsolationLevel isolationLevel)
The problem with using only the transaction scope is outlined here:
NHibernate FlushMode Auto Not Flushing Before Find
It appears NHibernate (v3.1 with the Oracle dialect and an 11g DB, with ODP.NET v2.112.1.2) requires its own transactions to avoid the flushing issue, but I haven't been able to get TransactionScope to work together with the NHibernate transactions.
I can't seem to get it to work :(
This might be a defect in NHibernate or ODP.NET, I'm not sure...
I found the same problem described here:
NHibernate 3.0: TransactionScope and Auto-Flushing
FIXED: I found a solution! By putting "enlist=dynamic;" into my Oracle connection string, the problem was resolved. I have been able to use both the NHibernate transaction (to fix the flush issue) and the TransactionScope, like so:
ISessionFactory sessionFactory = CreateSessionFactory();

using (TransactionScope ts = new TransactionScope())
{
    using (ISession session = sessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        // do stuff here
        tx.Commit();
    }
    ts.Complete();
}
I checked my log files and found this:
2011-06-27 14:03:59,852 [10] DEBUG NHibernate.Impl.AbstractSessionImpl - enlisted into DTC transaction: Serializable
before any SQL was executed on the connection. I will unit test to confirm proper execution. I'm not too sure what Serializable is telling me, though.
Brad's answer, using an outer TransactionScope and an inner NHibernate transaction with enlist=dynamic, doesn't seem to work properly. True, the data gets committed.
But if you omit scope.Complete() or raise an exception after tx.Commit(), the data still gets committed (on Oracle)! For some reason, however, this works correctly on SQL Server.
NHibernate transactions take care of auto-flush, but in the end they call the underlying ADO.NET transaction. While many sources encourage the above pattern as best practice for NHibernate to solve the auto-flush issue, sources discussing native ADO.NET say the contrary: do NOT use TransactionScope and inner transactions together, not for Oracle and not for SQL Server. (See this question and my answer.)
My conclusion: do not combine TransactionScope and NHibernate transactions. To use TransactionScope, skip NHibernate transactions and handle the flushing manually (see also the NHibernate Flush documentation).
One question: why are you doing the inner session.BeginTransaction at all? Since 2.1 GA, NHibernate automatically enlists in TransactionScope contexts, so there is no reason to manage your own transaction anymore.
From the NHibernate Cookbook:
Remember that NHibernate requires an NHibernate transaction when interacting with the database. TransactionScope is not a substitute. As illustrated in the next image, the TransactionScope should completely surround both the session and NHibernate transaction. The call to TransactionScope.Complete() should occur after the session has been disposed. Any other order will most likely lead to nasty, production crashing bugs like connection leaks.
My opinion is also that it should work with a TransactionScope alone, but it does not, neither in version 3.3.x.x nor in 4.0.0.400.
The recipe above may work, but it still needs to be tested with nested TransactionScopes, with an inner TransactionScope that uses TransactionScopeOption.Suppress (when using SQL Server), etc.