I have multiple stacked transactions created by calling stacked methods with:
@Transactional(propagation = Propagation.REQUIRES_NEW)
so the result is a transaction waiting for a new transaction, which waits for yet another new transaction, and so on.
Does each of these transactions use a separate DB connection from the connection pool, possibly starving the pool?
P.S.: I know that I shouldn't stack new transactions, because an error will not roll back all of the stacked transactions, but I'm curious about the behaviour.
Yes, when you are using REQUIRES_NEW you will get a new transaction for every method call. A new transaction means a new database connection is taken from the pool.
And yes, that means potentially starving it.
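As a minimal sketch of how that plays out (the service and method names here are made up for illustration), each REQUIRES_NEW call suspends the caller's transaction and takes another connection from the pool:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
class OuterService {

    private final InnerService innerService;

    OuterService(InnerService innerService) {
        this.innerService = innerService;
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void outer() {
        // connection #1 is held (suspended, not returned to the pool) while...
        innerService.inner();
    }
}

@Service
class InnerService {

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void inner() {
        // ...connection #2 is taken here; stack this deep enough and the pool runs dry
    }
}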
You might enjoy this database transactions book for more detailed information including lots of code examples: https://www.marcobehler.com/books/1-java-database-connections-transactions
How can I keep the persistence context small in a Spring JPA environment?
Why: I know that by keeping the persistence context small, there will be a significant performance boost!
The main problem area is:
@Transactional
void methodA() {
    WHILE retrieving next object of 51M (via a stateless session connection) DO
        get some further (read-only) data
        IF condition holds THEN
            assessment = retrieve assessment object (= record from database)
            change assessment data
            save the assessment to the database
}
Via experiments in this problem domain I know that when I clear the persistence context every 250 iterations, performance gets a lot better.
When I add these lines to the code, so that every 250 iterations the context is flushed and cleared:
@PersistenceContext
private EntityManager em;

WHILE ...
    ...
    IF counter++ % 250 == 0 THEN
        em.flush()
        em.clear()
}
Then I get errors like "cannot reliably perform the flush operation".
I tried to make the main @Transactional read-only and the assessment-save part REQUIRES_NEW; then I get errors like 'operating on a detached entity'. Very strange, because I never revisit an open entity.
So, how can I keep the persistence context small?
Have tried 10s of ways. Help is really appreciated.
I would suggest you move all the condition logic into your query so that you don't even have to load that many rows/objects. Or even better, write an update query that does all of that in a single transaction so you don't need to transfer any data at all between your application and database.
I don't think that flushing is necessary with a stateless session, as it doesn't keep state, i.e. it flushes and clears the persistence context after every operation. Apart from that, I also think this might not be what you really want, as it could lead to re-fetching of data.
If you don't want the persistence context to fill up, then use DTOs for fetching the data and execute update statements to flush the changes.
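A sketch of that approach; the entity name Assessment, the fields, the flagged condition, and the DTO/package names are all assumptions based on the question's pseudocode, not the asker's real model:

package com.example;

import java.util.List;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;
import org.springframework.transaction.annotation.Transactional;

// Thin DTO carrying only what the condition check needs (hypothetical fields).
record AssessmentDto(Long id, String status) {}

public class AssessmentBatch {

    @PersistenceContext
    private EntityManager em;

    // Option 1: the whole condition fits in the query; no data is transferred
    // between application and database at all.
    @Transactional
    public int bulkAssess() {
        return em.createQuery(
                "update Assessment a set a.status = :s where a.flagged = true")
            .setParameter("s", "ASSESSED")
            .executeUpdate();
    }

    // Option 2: part of the check needs Java logic; fetch DTOs, not managed
    // entities, so the persistence context never fills up.
    @Transactional
    public void assessWithAppLogic() {
        List<AssessmentDto> candidates = em.createQuery(
                "select new com.example.AssessmentDto(a.id, a.status) "
              + "from Assessment a where a.flagged = true", AssessmentDto.class)
            .getResultList();

        List<Long> ids = candidates.stream()
            .filter(this::conditionHolds) // the IF from the pseudocode
            .map(AssessmentDto::id)
            .toList();

        // Bulk update: the change is applied in the database directly.
        em.createQuery("update Assessment a set a.status = :s where a.id in :ids")
            .setParameter("s", "ASSESSED")
            .setParameter("ids", ids)
            .executeUpdate();
    }

    private boolean conditionHolds(AssessmentDto dto) {
        return true; // placeholder for the real business rule
    }
}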
I am trying to create an API in Spring Boot using the @Transactional annotation for bank fund transfers.
Now I want to know: if there are multiple calls to the same API at the same time, how is the transaction managed between them? Suppose, for example, the transferBalance method is called by transaction X, which transfers funds from account A to B, while another transaction Y transfers funds from B to C. Both transactions occur at the same time. How would these transactions be handled? What propagation should the method have, and what about the isolation level?
Check the isolation levels described below for your case. With more than one concurrent transaction on the same data you can also go with SERIALIZABLE.
The isolation level defines how the changes made to some data repository by one transaction affect other concurrent transactions, and also how and when that changed data becomes available to other transactions. When we define a transaction using the Spring framework, we are also able to configure the isolation level at which that transaction will be executed.
@Transactional(isolation = Isolation.READ_COMMITTED)
public void someTransactionalMethod(Object obj) {
}
READ_UNCOMMITTED isolation level states that a transaction may read data that is still uncommitted by other transactions.
READ_COMMITTED isolation level states that a transaction can't read data that is not yet committed by other transactions.
REPEATABLE_READ isolation level states that if a transaction reads one record from the database multiple times the result of all those reading operations must always be the same.
SERIALIZABLE isolation level is the most restrictive of all isolation levels. Transactions are executed with locking at all levels (read, range and write locking) so they appear as if they were executed in a serialized way.
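For the transfer scenario itself, a common pattern is READ_COMMITTED plus row-level locks, so that two concurrent transfers sharing an account (B in your example) serialize only on that row. A sketch under assumed names (the Account entity, its fields, and the pessimistic-lock choice are illustrative, not the only correct answer):

import java.math.BigDecimal;
import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.Id;
import jakarta.persistence.LockModeType;
import jakarta.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Entity
class Account {
    @Id Long id;
    BigDecimal balance;
}

@Service
public class TransferService {

    @PersistenceContext
    private EntityManager em;

    @Transactional(isolation = Isolation.READ_COMMITTED)
    public void transferBalance(Long fromId, Long toId, BigDecimal amount) {
        // Lock rows in a consistent order (lowest id first) to avoid deadlocks
        // when X does A->B while Y does B->C at the same time.
        Long first = fromId < toId ? fromId : toId;
        Long second = fromId < toId ? toId : fromId;
        Account a = em.find(Account.class, first, LockModeType.PESSIMISTIC_WRITE);
        Account b = em.find(Account.class, second, LockModeType.PESSIMISTIC_WRITE);

        Account from = a.id.equals(fromId) ? a : b;
        Account to = (from == a) ? b : a;
        from.balance = from.balance.subtract(amount);
        to.balance = to.balance.add(amount);
    }
}

With the row locks in place, READ_COMMITTED is usually sufficient here; SERIALIZABLE would also work but allows less concurrency.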
Your doubt has nothing to do with @Transactional.
It's a simple question of concurrency.
Actually both transactions, from A to B and from B to C, can run concurrently.
Putting @Transactional on a method states something like:
works out whether a given exception should cause transaction rollback by applying a number of rollback rules, both positive and negative. If no rules are relevant to the exception, it behaves like DefaultTransactionAttribute (rolling back on runtime exceptions).
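In code, those positive and negative rollback rules look like this (the service and exception types are made up for illustration):

import org.springframework.transaction.annotation.Transactional;

class TransferFailedException extends Exception {}
class NotificationException extends RuntimeException {}

public class PaymentService {

    // Positive rule: roll back on this checked exception too.
    // Negative rule: do NOT roll back when only the notification fails.
    // Where no rule matches, the default applies: runtime exceptions roll back.
    @Transactional(rollbackFor = TransferFailedException.class,
                   noRollbackFor = NotificationException.class)
    public void pay() throws TransferFailedException {
        // ...
    }
}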
This particular question has been asked multiple times already, but I am still unsure about it. My setup is something like this: I am using JDBC and have autocommit set to false. Let's say I have 3 insert statements that I want to execute as a transaction, followed by conn.commit().
sample code:
try {
    conn = getConnection();
    conn.setAutoCommit(false);
    insertStatement(); // #1
    insertStatement(); // #2
    insertStatement(); // #3, could throw an error
    conn.commit();
} catch (SQLException e) {
    conn.rollback(); // why is it needed?
}
Say I have two scenarios:
Either there won't be any error, we call conn.commit(), and everything is updated.
Or the first two statements work fine but there is an error in the third one. Then conn.commit() is not called and our database is in a consistent state. So why do I need to call conn.rollback()?
I noticed that some people mentioned that rollback has an impact in the case of connection pooling. Could anyone explain how it has an effect?
A rollback() is still necessary. Not committing or rolling back a transaction might keep resources in use on the database (transaction handles, logs or record versions etc). Explicitly committing or rolling back makes sure those resources are released.
Not doing an explicit rollback may also have bad effects when you continue to use the connection and later commit: the changes that did complete in the failed transaction (#1 and #2 in your example) will then be persisted after all.
The Connection API documentation, however, does say: "If auto-commit mode has been disabled, the method commit must be called explicitly in order to commit changes; otherwise, database changes will not be saved." This should be interpreted as: Connection.close() causes a rollback. However, I believe there have been JDBC driver implementations that committed on connection close.
The impact on connection pooling should not exist for correct implementations: closing the logical connection obtained from the connection pool should have the same effect as closing a physical connection, i.e. an open transaction should be rolled back. However, sometimes connection pools are not correctly implemented, have bugs, or take shortcuts for performance reasons, all of which could lead to a transaction already being open when you are handed a logical connection from the pool.
Therefore: be explicit in calling rollback.
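Putting that together, a sketch of the explicit pattern (getConnection() and insertStatement() stand in for the question's own helpers, as in the original snippet):

import java.sql.Connection;
import java.sql.SQLException;

void runInTransaction() throws SQLException {
    Connection conn = getConnection(); // your pool/DataSource lookup
    try {
        conn.setAutoCommit(false);
        insertStatement(conn); // #1
        insertStatement(conn); // #2
        insertStatement(conn); // #3, could throw
        conn.commit();
    } catch (SQLException e) {
        conn.rollback(); // explicitly release transaction handles, locks, record versions
        throw e;
    } finally {
        conn.close(); // hand the logical connection back to the pool in a clean state
    }
}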
The application server creates a new transaction before calling the MDB's onMessage method. I am also processing a database update in the onMessage method. Transactions create additional overhead, and processing several messages in one transaction could increase performance.
Is it possible to make the application server use one transaction for several messages? Or maybe there are other approaches to this problem?
And, by the way, I can't use multiple instances, because I need to preserve the message order.
I guess you can store the messages in a list and, depending on how many messages you want to process in one transaction, check the size of the list and process the messages.
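A minimal sketch of that idea (BATCH_SIZE and processBatch are made-up names; note that buffering in memory weakens the container's per-message redelivery guarantees, so this is a trade-off, not a free win). A single listener instance also preserves the message order, matching the constraint in the question:

import java.util.ArrayList;
import java.util.List;
import javax.jms.Message;
import javax.jms.MessageListener;

public class BatchingListener implements MessageListener {

    private static final int BATCH_SIZE = 50; // hypothetical threshold
    private final List<Message> buffer = new ArrayList<>();

    @Override
    public void onMessage(Message message) {
        buffer.add(message);
        if (buffer.size() >= BATCH_SIZE) {
            processBatch(buffer); // perform all database updates in one transaction
            buffer.clear();
        }
    }

    private void processBatch(List<Message> batch) {
        // apply the database updates for the whole batch here
    }
}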
Just a little background: I'm a new developer who recently took over a major project after the senior developer left the company, before I could develop a full understanding of how he structured it. I'll try to explain my issue as best I can.
This application creates several MessageListener threads to read objects from JMS queues. Once an object is received, the data is manipulated based on some business logic and then mapped to a persistence object to be saved to an Oracle database using a Hibernate EntityManager.
Up until a few weeks ago there hadn't been any major issues with this configuration in the year or so since I joined the project. But for one of the queues (the issue is isolated to this particular queue), the Spring-managed bean that processes the received object hangs at the method below. My debugging has led me to conclude that it completes everything within the method but hangs upon returning. After weeks of trying to resolve this, I'm at the end of my rope. Any help would be greatly appreciated.
Since each MessageListener gets its own processor, this hanging method only affects the incoming data on one queue.
@Transactional(propagation = Propagation.REQUIRES_NEW, timeout = 180)
public void update(UserRelatedData userData, User user, Company company, ...)
{
    // ...
    // business logic performed on user object
    // ...
    entityMgr.persist(user);

    // business logic performed on userData object
    // ...
    entityMgr.persist(userData);

    // ...
    entityMgr.flush();
}
I inserted debug statements just to walk through the method, and it completes everything, including entityMgr.flush().
REQUIRES_NEW may hang in a test context because the transaction manager used in unit testing doesn't support nested transactions...
From the Javadoc of JpaTransactionManager:
* <p>This transaction manager supports nested transactions via JDBC 3.0 Savepoints.
* The {@link #setNestedTransactionAllowed "nestedTransactionAllowed"} flag defaults
* to {@code false} though, since nested transactions will just apply to the JDBC
* Connection, not to the JPA EntityManager and its cached entity objects and related
* context. You can manually set the flag to {@code true} if you want to use nested
* transactions for JDBC access code which participates in JPA transactions (provided
* that your JDBC driver supports Savepoints). <i>Note that JPA itself does not support
* nested transactions! Hence, do not expect JPA access code to semantically
* participate in a nested transaction.</i>
So clearly, if you don't make the following call (in Java config), or set the equivalent flag in your XML config:
txManager.setNestedTransactionAllowed(true);
or if your driver doesn't support Savepoints, it is "normal" to get problems with REQUIRES_NEW...
(Some may prefer an exception such as "nested transactions not supported".)
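For reference, a sketch of the Java-config side (assuming a plain JpaTransactionManager; the bean method name is conventional, and the persistence import is javax or jakarta depending on your Spring generation):

import jakarta.persistence.EntityManagerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
    JpaTransactionManager txManager = new JpaTransactionManager(emf);
    // Savepoint-based nesting applies to the JDBC Connection only; per the
    // Javadoc above, JPA access code will not semantically participate.
    txManager.setNestedTransactionAllowed(true);
    return txManager;
}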
This kind of problem can show up when the underlying database has locks from uncommitted changes.
What I would suspect is that some other code performed inserts/deletes on the userData table(s) outside a transaction, or in a transaction that takes a very long time to execute (since it's a batch job or similar). You should analyze all the code referring to these tables and look for a missing @Transactional.
Besides this answer, you may also check the isolation level of your transaction; perhaps it's too restrictive.
Does the update() method hang forever, or does it throw an exception when the timeout elapses?
Unfortunately, I have the same problem with Propagation.REQUIRES_NEW. Removing it resolves the problem. The debugger shows me that the commit method is hanging (invoked from the @Transactional aspect implementation).
The problem appears only in the test Spring context; when the application is deployed to the application server, it works fine.