JBoss autocommit to Oracle doesn't always work - oracle

I have a very interesting situation. I am fairly new to JBoss and Oracle, having worked mostly with WebLogic on DB2. That said, what I am trying to do is pretty simple.
I have a local-tx-datasource pointing to an Oracle database. From my Java code, I invoke datasource.getConnection() after retrieving the datasource using the appropriate JNDI name. The local-tx-datasource declaration in my -ds.xml file has no explicit reference to autocommit behaviour.
After getting the connection, I execute a create/update query and I get back the correct update count. Subsequently, for a short duration, I am even able to retrieve this record. However, after that the database pretends it never got the record in the first place, and there is nothing at all.
My experience with connections suggests that this happens when a connection does not commit its work, so only that connection can see the data inside its own transaction. From what I have read, JBoss also follows the JDBC specification, under which a newly obtained Connection has autocommit enabled. I even verified this from my Java code, which reports that autocommit is set to true. However, if that is the case, why are my records not getting created/updated?
Following this, I set the Connection's autocommit to false (again from Java code) and then committed explicitly. Since then, there has been no issue.
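For reference, the workaround looks roughly like this (a minimal sketch; the JNDI name, table, and SQL are illustrative, not taken from the question):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class ExplicitCommitExample {
        public void insertRecord() throws Exception {
            // "java:/MyOracleDS" is an illustrative JNDI name.
            DataSource ds = (DataSource) new InitialContext().lookup("java:/MyOracleDS");
            try (Connection conn = ds.getConnection()) {
                conn.setAutoCommit(false); // disable autocommit explicitly
                try (PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO my_table (id, name) VALUES (?, ?)")) {
                    ps.setLong(1, 1L);
                    ps.setString(2, "example");
                    ps.executeUpdate();
                }
                conn.commit(); // commit explicitly; the insert now persists
            }
        }
    }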
What could possibly be going wrong? Is my understanding of autocommit here incorrect, or does JBoss have some other interpretation of it? Please note, I do not have any transactions at all. These are very simple single-record insert queries.

Please note, I do not have any transactions at all.
Wrong assumption. The local-tx-datasource starts a JTA transaction on your behalf. I'm not sure how autocommit works in this scenario, but I suppose that autocommit applies only when you are using exclusively JDBC transactions, not JTA transactions.
In JTA, if you don't commit a transaction[*], it will be rolled back after the timeout. This explains the scenario you are experiencing. So, I'd try to either change the local-tx-datasource to a no-tx-datasource or manually commit the transaction.
Note, however, that not managing your transactions is a bad thing. Autocommit should always be avoided. There's no better party to determine when to commit than your application. Leaving this responsibility to the driver/container is, IMO, not very responsible :-)
[*] One exception is for operations inside EJBs, whose business methods are "automatically" wrapped in a JTA transaction. So, you don't need to explicitly commit the transaction.
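To illustrate the "manually commit the transaction" option above: with a local-tx-datasource you drive the JTA transaction yourself, roughly like this (a minimal sketch assuming the standard java:comp/UserTransaction binding; the datasource JNDI name and SQL are illustrative):

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class JtaCommitExample {
        public void insertRecord() throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx =
                    (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource ds = (DataSource) ctx.lookup("java:/MyOracleDS"); // illustrative

            utx.begin(); // start the JTA transaction explicitly
            try (Connection conn = ds.getConnection();
                 Statement st = conn.createStatement()) {
                st.executeUpdate("INSERT INTO my_table (id) VALUES (1)");
                utx.commit(); // commit so the work isn't rolled back at the timeout
            } catch (Exception e) {
                utx.rollback();
                throw e;
            }
        }
    }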

Related

What happens when spring transaction isolation level conflicts with database transaction isolation level?

As far as I know, the database transaction isolation level takes priority; or can Spring override it?
If the database level has priority, what are the cases for using Spring's isolation configuration?
There is no such separation as a "database transaction isolation level" and a "Spring transaction isolation level".
A DB might implement the isolation levels defined by the SQL standard and a client that starts a transaction might request a specific level of isolation for it.
There are, however, a couple of things to note that do not present any contradiction:
A DB usually has a default isolation level that is used if a client does not explicitly request a specific level for a transaction. Say, in PostgreSQL the default one is Read Committed and in MySQL it's Repeatable Read.
A DB might not implement all of the isolation levels or might have some specifics in its implementation. E.g. Oracle DB does not support the Read Uncommitted and Repeatable Read isolation levels, and PostgreSQL's Read Uncommitted mode behaves like Read Committed.
With Spring, when you specify an isolation level, either via the @Transactional(isolation = ...) annotation or TransactionTemplate#setIsolationLevel(), it makes the JDBC driver issue an SQL command to set the desired level for the current session.
E.g. Oracle JDBC driver will do ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED for Read Committed.
If an unsupported level is specified it'll throw an exception.
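For example, a minimal sketch of both styles (the class and method names are illustrative):

    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.TransactionDefinition;
    import org.springframework.transaction.annotation.Isolation;
    import org.springframework.transaction.annotation.Transactional;
    import org.springframework.transaction.support.TransactionTemplate;

    public class IsolationExamples {

        // Declarative: Spring asks the driver to set Read Committed
        // on the session before this method's transaction starts.
        @Transactional(isolation = Isolation.READ_COMMITTED)
        public void declarativeWork() {
            // ... data access runs under Read Committed
        }

        // Programmatic equivalent via TransactionTemplate.
        public void programmaticWork(PlatformTransactionManager txManager) {
            TransactionTemplate tx = new TransactionTemplate(txManager);
            tx.setIsolationLevel(TransactionDefinition.ISOLATION_SERIALIZABLE);
            tx.executeWithoutResult(status -> {
                // ... data access runs under Serializable
            });
        }
    }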
Refs:
https://www.postgresql.org/docs/current/transaction-iso.html
https://docs.oracle.com/cd/E25054_01/server.1111/e25789/consist.htm#CNCPT1312

Why roll back transactions in a Spring test environment?

From the Spring documentation:
One common issue in tests that access a real database is their effect on the state of the persistence store. Even when you use a development database, changes to the state may affect future tests. Also, many operations — such as inserting or modifying persistent data — cannot be performed (or verified) outside of a transaction.

The TestContext framework addresses this issue. By default, the framework creates and rolls back a transaction for each test. You can write code that can assume the existence of a transaction. If you call transactionally proxied objects in your tests, they behave correctly, according to their configured transactional semantics. In addition, if a test method deletes the contents of selected tables while running within the transaction managed for the test, the transaction rolls back by default, and the database returns to its state prior to execution of the test. Transactional support is provided to a test by using a PlatformTransactionManager bean defined in the test's application context.

If you want a transaction to commit (unusual, but occasionally useful when you want a particular test to populate or modify the database), you can tell the TestContext framework to cause the transaction to commit instead of roll back by using the @Commit annotation.
How can we be certain that a transactional test was successful if the transaction is rolled back afterwards? Perhaps the test would fail as a result of the transaction failing upon commit, for instance because of a violation of an SQL constraint. Or am I missing something?
As per my understanding, the transaction is rolled back by default so that the state of the database remains unchanged.
For cases like a unique-constraint violation, ideally you should be verifying the exception message/code that your application throws, rather than verifying the state of the transaction in unit tests.
Note that you don't need to verify whether the rollback actually rolled the transaction back; you need to verify that your application raises an error once the constraint violation occurs.
So your success criterion in this case is to check that an error is thrown after trying to insert a duplicate record, and that the error message your method throws is correct.
For cases like updates/inserts to a table, you can mark the test case with an explicit commit and verify it by executing a select query within the test; that will be your success criterion.
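A minimal sketch of both approaches (the entity, repository, and query method are illustrative and assume JUnit 5 with Spring Boot's test support):

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.dao.DataIntegrityViolationException;
    import org.springframework.test.annotation.Commit;
    import org.springframework.transaction.annotation.Transactional;

    import static org.junit.jupiter.api.Assertions.assertThrows;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    @SpringBootTest
    @Transactional // each test runs in a transaction that is rolled back by default
    class UserRepositoryTest {

        @Autowired
        private UserRepository repository; // illustrative Spring Data repository

        @Test
        void duplicateInsertThrows() {
            repository.save(new User("alice"));
            // Success criterion: the unique-constraint violation surfaces
            // as an exception (flushing forces the INSERT to hit the DB).
            assertThrows(DataIntegrityViolationException.class,
                    () -> repository.saveAndFlush(new User("alice")));
        }

        @Test
        @Commit // opt out of the default rollback for this test
        void insertIsCommitted() {
            repository.save(new User("bob"));
            assertTrue(repository.findByName("bob").isPresent());
        }
    }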

JPA Repository findAll method: which exceptions?

I am using a JPA repository. As you know, there are some standard methods, e.g. save, update, or findAll(). I really like JPA, but one thing really bothers me: even on the official website there are no hints about which exceptions these methods throw. See https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/repository/CrudRepository.html
I do not think findAll() will throw a lot of exceptions. Of course there will be one if the database connection is lost, but there should be no others.
So for any database method there could be an exception, and this always has to be handled separately in my service, right?
No, you don't have to handle the exceptions. Exceptions thrown in the repository will be RuntimeExceptions, and they will automatically roll back the transaction.
That's exactly what you want at this point.
On the other side, you have a connection pool that will handle lost connections, so there is no need for any exception handling on your side there either.
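If you do want to react to a failure at some boundary, note that Spring translates persistence errors into its unchecked DataAccessException hierarchy; a minimal sketch (the entity, repository, and service are illustrative):

    import java.util.List;
    import org.springframework.dao.DataAccessException;
    import org.springframework.data.jpa.repository.JpaRepository;

    // Customer is an illustrative JPA entity (not shown).
    interface CustomerRepository extends JpaRepository<Customer, Long> {}

    class CustomerService {
        private final CustomerRepository repository;

        CustomerService(CustomerRepository repository) {
            this.repository = repository;
        }

        List<Customer> loadAll() {
            try {
                return repository.findAll();
            } catch (DataAccessException e) {
                // Unchecked, so catching is optional: do it only where you
                // can add value (e.g. map to a domain error); otherwise let
                // it propagate and roll the transaction back.
                throw new IllegalStateException("customer lookup failed", e);
            }
        }
    }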

How expensive are transactions in Grails?

I'm looking at performance issues with a Grails application, and the suggestion is to remove the transactions from the services.
Is there a way that I can measure the change in the service?
Is there a place that has data on how expensive transactions are? [Time and resource-wise]
If someone told you that removing transactions from your services is a good way to help performance, you should not listen to any future advice from that person. You should look at the time spent in transactions to determine what the real overhead is, find methods and entire services that run in transactions but don't need to, and make those non-transactional. But removing all transactions would be irresponsible.
You would be intentionally adding sporadic errors in method return values and making your data inconsistent, and this will get worse when you have a lot of traffic. A somewhat faster but buggy app or web site is not going to be popular, and if this doesn't help performance much, then you still have to do the real work of finding the bottlenecks, missing indexes, and other things that are genuinely causing problems.
I would remove all @Transactional annotations and database writes from all controllers though; not for performance reasons, but to keep the application tiers sensible and not polluted with unrelated code and logic.
If you find one or more service methods that don't require transactions, switch to annotating each transactional method as needed but omit the annotation at class scope so un-annotated methods inherit nothing and aren't transactional. You could also move those methods to non-transactional services.
Note that services are only non-transactional if there are no @Transactional annotations and there is a transactional property disabling the feature:
static transactional = false
If you don't have that property and have no annotations, the service will look non-transactional, but transactional defaults to true if not specified.
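A sketch of the method-level approach described above, using Spring's @Transactional (which Grails services also accept); the service and its methods are illustrative:

    import org.springframework.transaction.annotation.Transactional;

    public class ReportService {

        // No class-level @Transactional (and no 'static transactional = true'),
        // so un-annotated methods are not transactional.

        @Transactional // only this method starts or joins a transaction
        public void archiveReports(String category) {
            // ... writes that need atomicity
        }

        public int countReports(String category) {
            // ... read-only work that doesn't need a transaction
            return 0;
        }
    }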
There's also something else that can help a lot (and already does). The dataSource bean is actually a proxy of a proxy. One proxy returns the connection from the pool that's being used by an open Hibernate session or transaction, so you can see uncommitted data and do your queries and updates in the same connection. The other is more related to your question: org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy, which has been in Spring for years but only used in Grails since 2.3. It helps with methods that start or participate in a transaction but do no database work.

For the case of a single method call that unnecessarily starts and commits an 'empty' transaction, the overhead includes getting the pooled connection, calling set-autocommit-false, setting the transaction isolation level, and so on. All of these are small costs, but they add up. The class works by giving you a proxied connection that caches these method calls, and only gets a real connection and invokes those methods on it when a query is actually run. If there are no queries and the only calls are those transaction-related setup methods, there's basically no cost at all. You shouldn't rely on this, and you should be intentional with your use of @Transactional annotations, but if you miss one, this proxy will help avoid unnecessary work.
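Outside of Grails you can wire the same proxy up yourself; a minimal Spring configuration sketch (the pool setup and URL are illustrative):

    import javax.sql.DataSource;
    import org.apache.commons.dbcp2.BasicDataSource;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy;

    @Configuration
    public class DataSourceConfig {

        @Bean
        public DataSource targetDataSource() {
            BasicDataSource pool = new BasicDataSource(); // any pooled DataSource works
            pool.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // illustrative URL
            return pool;
        }

        @Bean
        public DataSource dataSource() {
            // Physical connection acquisition and the autocommit/isolation
            // setup calls are deferred until a statement actually runs.
            return new LazyConnectionDataSourceProxy(targetDataSource());
        }
    }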

Clojure JDBC transaction not rolling back on BatchUpdateException in HSQL

I'm writing a Clojure program using clojure.java.jdbc. I'm using DBCP to pool connections to HSQL 2.2.8. I have a (transaction) block in which I test whether a schema exists and, if not, create it and a bunch of tables. One of the statements after the schema create (I believe a MERGE statement) throws a BatchUpdateException.
The issue is that the schema create is not rolled back on the BatchUpdateException, even though they're part of the same (transaction) block.
Are there known issues with Clojure JDBC interacting with DBCP or HSQL?
Never mind.
Transactions don't apply to schema changes, apparently. WTF? In HSQLDB, as in many databases, DDL statements such as CREATE SCHEMA or CREATE TABLE commit the current transaction implicitly, so by the time the MERGE threw, the schema creation was already committed and could no longer be rolled back.
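A minimal JDBC sketch that demonstrates the behaviour (assuming an in-memory HSQLDB database; this is an illustration, not the original Clojure program):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DdlImplicitCommitDemo {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                     DriverManager.getConnection("jdbc:hsqldb:mem:demo", "SA", "")) {
                conn.setAutoCommit(false);
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE t (id INT PRIMARY KEY)"); // DDL commits implicitly
                    st.executeUpdate("INSERT INTO t VALUES (1)");      // DML, still uncommitted
                }
                conn.rollback(); // undoes the INSERT, but not the CREATE TABLE
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM t")) {
                    rs.next();
                    System.out.println("rows after rollback: " + rs.getInt(1)); // 0, but the table exists
                }
            }
        }
    }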
