What happens when Spring transaction isolation level conflicts with database transaction isolation level?

As far as I know, the database's transaction isolation level takes priority. Or can Spring override it?
If the database level has priority, in which cases is Spring's isolation configuration useful?

There is no such separation as a "database transaction isolation level" and a "spring transaction isolation level".
A DB might implement the isolation levels defined by the SQL standard and a client that starts a transaction might request a specific level of isolation for it.
There are a couple of things to note, which however do not present any contradiction:
A DB usually has a default isolation level that is used if a client does not explicitly request a specific level for a transaction. Say, in PostgreSQL the default one is Read Committed and in MySQL it's Repeatable Read.
A DB might not implement all of the isolation levels or have some specifics in their implementation. E.g. Oracle DB does not support the Read Uncommitted and Repeatable Read isolation levels and PostgreSQL's Read Uncommitted mode behaves like Read Committed.
With Spring, when you specify an isolation level, either via the @Transactional(isolation = ...) annotation or TransactionTemplate#setIsolationLevel(), it makes the JDBC driver issue an SQL command to set the desired level for the current session.
E.g. the Oracle JDBC driver will do ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED for Read Committed.
If an unsupported level is specified, the driver will throw an exception.
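Under the hood this goes through plain JDBC: Spring requests the level via Connection#setTransactionIsolation(int) using the java.sql.Connection constants, and the driver translates the constant into a session-level SQL command. A minimal sketch of that correspondence (the class and map are illustrative, not Spring's internals):

```java
import java.sql.Connection;
import java.util.Map;

public class IsolationLevels {
    // The JDBC constants that Spring's Isolation enum values correspond to;
    // the driver turns the chosen constant into a session-level SQL command.
    static final Map<String, Integer> SPRING_TO_JDBC = Map.of(
            "READ_UNCOMMITTED", Connection.TRANSACTION_READ_UNCOMMITTED,
            "READ_COMMITTED",   Connection.TRANSACTION_READ_COMMITTED,
            "REPEATABLE_READ",  Connection.TRANSACTION_REPEATABLE_READ,
            "SERIALIZABLE",     Connection.TRANSACTION_SERIALIZABLE);

    public static void main(String[] args) {
        SPRING_TO_JDBC.forEach((name, jdbcLevel) ->
                System.out.println(name + " -> " + jdbcLevel));
    }
}
```

A driver that does not support the requested constant rejects it at this point, which is where the exception mentioned above comes from.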
Refs:
https://www.postgresql.org/docs/current/transaction-iso.html
https://docs.oracle.com/cd/E25054_01/server.1111/e25789/consist.htm#CNCPT1312

Related

jdbc is not able to find data flushed by jpa repository in the same transaction

I have a JUnit 5 test case annotated with @Transactional. It first calls a save service (which uses JPA saveAndFlush), then tries to retrieve the same data/entity through a find service that uses plain JDBC for searching, but it is not able to find that entity.
When I tried setting the transaction isolation to READ_UNCOMMITTED, it threw an exception saying "java.sql.SQLException: READ_COMMITTED and SERIALIZABLE are the only valid transaction levels". Please note that I am using an Oracle database.
Is there any other way to read data written in the same transaction from JDBC code?
Oracle does not implement the Read Uncommitted isolation level, so you will not be able to see uncommitted changes from other sessions.
As Mark Rotteveel and Marmite Bomber said, two different transactions/connections (JPA and JDBC) cannot see each other's uncommitted data, especially with an Oracle database.
I had to use a Spring-managed JdbcTemplate so that JPA and the JdbcTemplate use the same transaction and the data is visible to both.
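The visibility rule the answers describe can be illustrated with a toy model (this is not a real database or the JDBC API, just a sketch): each transaction buffers its writes until commit, so a second transaction cannot see them before then:

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory "database": uncommitted writes live in a per-transaction
// buffer, so only the writing transaction can see them before commit.
class ToyDb {
    final Map<String, String> committed = new HashMap<>();

    class Tx {
        final Map<String, String> buffer = new HashMap<>();

        void insert(String key, String value) { buffer.put(key, value); }

        String find(String key) {
            // A transaction sees its own uncommitted writes first...
            if (buffer.containsKey(key)) return buffer.get(key);
            // ...while other transactions only ever see committed data.
            return committed.get(key);
        }

        void commit() { committed.putAll(buffer); buffer.clear(); }
    }
}
```

In these terms, the JPA transaction had flushed into its own buffer while the plain-JDBC connection was a second Tx reading only committed data; routing the JDBC work through the same Spring-managed transaction is what made the data visible.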

How to select PROPAGATION in spring transaction?

I am just reading spring-mybatis.xml; here is some of the transaction-manager code:
I want to know why some methods are defined as "REQUIRED" or "SUPPORTS". How should I think about it and decide which one to choose?
Your question is about Spring transactions, and the answer depends on your business logic and how you want to control the transaction.
To understand the Spring transaction values "REQUIRED" and "SUPPORTS", you need to understand the Spring transaction definitions. These come from the org.springframework.transaction.TransactionDefinition class. But first you need to understand 1) Spring transaction types and then 2) Spring transaction definitions.
1) Spring supports two types of transaction management:
Programmatic transaction management: This means that you manage the transaction yourself in code. That gives you extreme flexibility, but it is difficult to maintain.
Declarative transaction management: This means you separate transaction management from the business code. You only use annotations or XML based configuration to manage the transactions.
2) Spring Transaction Definition
PROPAGATION_REQUIRED:
Spring's REQUIRED behavior means that the same transaction will be used if there is an already opened transaction in the current bean method execution context; a new one is created if none exists.
In short this means that if an inner (2nd) transactional method causes a rollback, the outer (1st) method will fail to commit and will also roll back the transaction.
PROPAGATION_SUPPORTS:
Support a current transaction; execute non-transactionally if none exists.
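The difference between the two behaviours can be reduced to a small decision function (an illustrative sketch, not Spring's actual implementation):

```java
// Sketch of the propagation decision: given whether a transaction is already
// open when the method is entered, does the method run transactionally?
enum Propagation { REQUIRED, SUPPORTS }

class PropagationDemo {
    static boolean runsTransactionally(Propagation p, boolean txAlreadyOpen) {
        switch (p) {
            case REQUIRED:
                return true;            // join the existing tx, or create a new one
            case SUPPORTS:
                return txAlreadyOpen;   // join if present, otherwise run non-transactionally
            default:
                throw new IllegalArgumentException("unknown propagation: " + p);
        }
    }
}
```

So with REQUIRED a transaction is always present, while with SUPPORTS the method simply takes whatever context it was called in.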
Understanding "REQUIRED" and "SUPPORTS" is not enough; as mentioned above, you need to understand all the Spring definitions in the org.springframework.transaction.TransactionDefinition class.
I wrote a PowerPoint presentation about these Spring transaction types and definitions in December 2014 on SlideShare:
Spring Transaction Management
In these slides I added very important points about Spring transactions in the PowerPoint notes section, so please refer not only to the slide content but also to the slide notes for a fuller understanding of the Spring transaction definitions. Hope it helps.
Edited:
Propagation Means: Typically, all code executed within a transaction scope will run in that transaction. However, you have the option of specifying the behavior in the event that a transactional method is executed when a transaction context already exists. For example, code can continue running in the existing transaction (the common case); or the existing transaction can be suspended and a new transaction created. Spring offers all of the transaction propagation options familiar from EJB CMT. To read about the semantics of transaction propagation in Spring, see Transaction Propagation

Cache Isolation Level Warning on Parent Entity

After adding a second persistence unit and changing my application's data sources to XADataSource (MySQL), I'm now getting a confusing warning in the glassfish log about isolation levels on my parent entity:
WARN o.e.p.s.f.j.ejb_or_metadata : Parent Entity BaseEntity has an isolation
level of: PROTECTED which is more protective then the subclass Contact with
isolation: null so the subclass has been set to the isolation level PROTECTED.
After some research, I think that this isolation level warning message is coming from EclipseLink's caching mechanism. But I am not specifying an isolation level anywhere in my app, so it appears that something in my configuration has triggered the BaseEntity class to have an isolation level of 'PROTECTED'. The documentation is silent on what might cause it to be automatically assigned to that level -- see user guide.
Minor testing with a single user has shown that the application seems to work as expected, but this warning message doesn't make me feel comfortable rolling it out to the masses.
Can anyone shed some light into this message? Are my concerns valid?
The cache implementation here is just trying to sync the isolation level of the parent and child entities. But I think you should override the default protective isolation level, because the Serializable isolation level is the most protective and has poor performance. You can use the Read Committed or Repeatable Read levels, depending on your requirements.
This is just a warning about cache isolation; it has nothing to do with database isolation, so you can just ignore it.
For more info on cache isolation see,
http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Caching/Shared_and_Isolated
It is odd if you have not done any caching configuration though. By default everything should be SHARED; to get something as PROTECTED you must have disabled caching for a related entity, such as by using @Cacheable(false)?
After some research, I discovered that this warning had nothing to do with using the XADataSource. I had earlier begun some exploration into EclipseLink's multitenancy, and it turned out that this was the culprit.
Referring to http://wiki.eclipse.org/EclipseLink/Examples/JPA/Multitenant#Persistence_Usage_for_Multiple_Tenants:
When using this architecture there is a shared cache available for regular entity types but the Multitenant types must be PROTECTED in the cache so the MULTITENANT_SHARED_EMF property must be set to true.
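Assuming the shared-EMF multitenant setup described in that quote, the property would go into persistence.xml roughly like this (the unit name is illustrative; eclipselink.multitenant.shared-emf is the property behind the MULTITENANT_SHARED_EMF constant):

```xml
<persistence-unit name="multitenant-pu">
  <properties>
    <!-- keep one shared EntityManagerFactory across tenants; multitenant
         entities are then held as PROTECTED in the shared cache -->
    <property name="eclipselink.multitenant.shared-emf" value="true"/>
  </properties>
</persistence-unit>
```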
FYI -- In reviewing the code, there are 3 other cases in ClassDescriptor.initializeCaching() in which the cache isolation is downgraded to PROTECTED:
If the entity has a DatabaseMapping marking it as non-cacheable.
If the entity has a ForeignReferenceMapping that doesn't have an isolation level of shared.
If the entity has an AggregateObjectMapping that doesn't have an isolation level of shared.

Distributed Ehcache working using JTA

I am trying to do some benchmarking of distributed transactional memory using Terracotta Ehcache (Open Source). I am having some trouble understanding how it works with JTA. In the code I have found that a cache interested in a distributed transaction enlists itself as a resource with JTA, on which JTA later executes the two-phase commit.
My question is: if only one cache is enlisted as a resource, how will JTA be able to update all the other caches atomically in a distributed setting? We are not passing the other caches' references to JTA, so the atomic update will not be done on them. I feel I am missing something here; can anyone explain how it works? I am new to J2EE too; am I missing some J2EE concept that allows automatic reference passing of the other caches to JTA?
Ehcache, if configured that way (transactionalMode="xa" or transactionalMode="xa_strict"), can work as a full XAResource to participate in JTA transactions (global transactions), which is controlled by a Transaction Manager (from your application server or some standalone product). Ehcache itself takes care of registering at the Transaction Manager in this case. Also, as a full XAResource, multiple caches can be registered and can be part of a transaction.
Strong, cluster-wide consistency is quite expensive (there is no free lunch). It boils down to using locks and synchronous (network) operations involving waits, acknowledgements, etc.
For a more detailed read, I'd suggest consulting the Ehcache docs: Transactions in Ehcache
Each cache configured with transactionalMode="xa_strict" and updated within a JTA transaction will register itself as an XAResource with the transaction manager. This is all automated and transparent to the end user; you don't have anything special to do for this mechanism to kick in, you just have to use your cache inside a JTA transaction.
If you also happen to access other, non-transactional caches in the JTA transaction context, those won't be part of the transaction and will simply be updated as soon as you modify them.
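A sketch of what that distinction looks like in ehcache.xml (cache names are illustrative): only the cache declared transactional enlists itself as an XAResource, while the plain cache is updated immediately:

```xml
<ehcache>
  <!-- participates in JTA transactions as an XAResource -->
  <cache name="xaCache" maxEntriesLocalHeap="1000"
         transactionalMode="xa_strict"/>
  <!-- non-transactional: updated as soon as it is modified -->
  <cache name="plainCache" maxEntriesLocalHeap="1000"/>
</ehcache>
```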
Tried xa_strict but there seems to be no automatic enlistment of the XAResource? Switching to plain xa works though...

JBoss autocommit to Oracle doesn't always work

I have a very interesting situation. I am slightly new to JBoss and Oracle, having worked mostly with Weblogic on DB2. That said, what I am trying to do is pretty simple.
I have a local-tx-datasource pointing to an Oracle database. From my Java code, I invoke datasource.getConnection() after retrieving the datasource using the appropriate JNDI name. The local-tx-datasource declaration in my -ds.xml file does not have any explicit reference to autocommit behaviour.
After getting the connection, I execute a create/update query and I get back the correct update count. Subsequently, for a short duration, I am even able to retrieve this record. However, after that the database pretends it never got the record in the first place, and there is nothing at all.
My experience with connections suggests that this happens when the connection does not commit its work, and so only that connection itself will be able to see the data in its transaction. From what I read, JBoss too follows the specification that the Connection returned is an autocommit one. I even verified this from my Java code, and it states the autocommit behaviour is set to true. However, if that was the case, why are my records not getting created / updated?
Following this, I set the Connection's autocommit behaviour to false (again from Java code), and then did the commit explicitly. Since then, there has been no issue.
What could possibly be going wrong? Is my understanding of autocommit here incorrect, or does JBoss have some other interpretation of it? Please note, I do not have any transactions at all; these are very simple single-record insert queries.
Please note, I do not have any transactions at all.
Wrong assumption. The local-tx-datasource starts a JTA transaction on your behalf. I'm not sure how autocommit works in this scenario, but I suppose that autocommit applies only when you are using purely JDBC transactions, not JTA transactions.
In JTA, if you don't commit a transaction[*], it will be rolled back after the timeout. This explains the scenario that you are experiencing. So, I'd try to either change the local-tx-datasource to no-tx-datasource or to manually commit the transaction.
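For the first option, the no-tx-datasource variant in the -ds.xml would look roughly like this (JNDI name, URL, and credentials are illustrative):

```xml
<datasources>
  <!-- no JTA transaction is started; plain JDBC autocommit semantics apply -->
  <no-tx-datasource>
    <jndi-name>OracleNoTxDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@localhost:1521:XE</connection-url>
    <driver-class>oracle.jdbc.OracleDriver</driver-class>
    <user-name>app</user-name>
    <password>secret</password>
  </no-tx-datasource>
</datasources>
```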
Note, however, that not managing your transactions is a bad thing. Autocommit should always be avoided. There's no better party to determine when to commit than your application. Leaving this responsibility to the driver/container is, IMO, not very responsible :-)
[*] One exception is for operations inside EJBs, whose business methods are "automatically" wrapped in a JTA transaction. So, you don't need to explicitly commit the transaction.
