Preconditions
Hikari connection pool
Oracle database
Default isolation level (read committed)
setAutoCommit(false)
Our connection from the pool has already been used for some queries (it's not new)
The question
When does the next transaction start?
right after the previous commit
on the first statement of any kind after the previous commit (even if it's a SELECT)
on the first data-changing statement after the previous commit (UPDATE, DELETE, INSERT)
something else?
An Oracle database transaction starts with the first SQL DML command (INSERT/UPDATE/DELETE) and ends with a commit or a rollback (which rolls back to the most recent commit). A SELECT may read the results of a transaction (committed by another session, or still uncommitted within its own) but is not itself part of the transaction. There is nothing to do in Java to explicitly open or close a transaction; the transaction is defined only by the session and the SQL commands executed in the database.
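To make those boundaries concrete, here is a minimal JDBC sketch; the connection string, table, and column names are placeholders rather than anything from the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleTxBoundaries {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "password");
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(false);

            // No transaction is open yet: this SELECT reads committed data
            // but does not start one.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT balance FROM accounts WHERE id = 1")) {
                while (rs.next()) { /* read only */ }
            }

            // The first DML statement implicitly opens the transaction...
            stmt.executeUpdate(
                    "UPDATE accounts SET balance = balance - 10 WHERE id = 1");

            // ...and commit (or rollback) ends it; the next DML on this
            // session starts a new one.
            conn.commit();
        }
    }
}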
Related
In a heavily used DB2 table, accessed by distributed Java desktop applications via JDBC, I'm getting the following scenario several times a day:
Client A wants to INSERT new rows and gets an IX lock on the table and an X lock on each new row;
Other clients want to perform a SELECT and are granted an IS lock on the table, but the application gets stuck;
Client A continues to work, but its INSERT and UPDATE statements are not committed, the locks are not released, and it keeps accumulating X locks on each new row;
Client A exits and its work is not committed. The other clients finally get their SELECT result set.
It used to work well, and it still does most of the time, but the lock situations are becoming more and more frequent.
Auto-commit is ON.
There are no exceptions thrown or errors detected in the logs.
DB2 9.5 / JDBC Driver 9.1 (JDBC 3 specification)
If the JDBC applications are not performing a COMMIT, then the locks will persist until a rollback or commit. If an application quits with uncommitted inserts, a rollback happens in all recent versions of Db2. This is expected behaviour for Db2 on Linux/Unix/Windows.
If the JDBC application is failing to commit, then it is broken or misconfigured, so you must get to the root cause of that if you want a permanent solution.
If the other clients want to ignore the insert row locks, they should choose the correct isolation level, and you can configure Db2 to skip insert locks; see the documentation for the DB2_SKIPINSERTED registry variable.
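As a hedged illustration of the isolation-level side of that advice, a reader could set read committed explicitly on its JDBC connection; the table name below is hypothetical, and DB2_SKIPINSERTED itself is enabled on the server with db2set, not from Java:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CommittedOnlyReader {
    // Hypothetical table; with DB2_SKIPINSERTED=ON on the server, this scan
    // does not wait on uncommitted inserts from other sessions.
    static void listOrders(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, status FROM app.orders")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getString("status"));
            }
        }
    }
}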
It turns out that sometimes auto-commit, and I don't know why, turns off for a single random instance of the application.
The following validation seems to solve the problem (but not its root cause):
if (!conn.getAutoCommit()) {
    // Auto-commit was unexpectedly off: flush whatever this client left pending.
    conn.commit();
}
After my app has been committing transactions for several minutes, I get the following exception:
Could not commit JDBC transaction; nested exception is
java.sql.SQLException: JZ006: Caught IOException:
java.net.SocketTimeoutException: Read timed out...
I'm using Sybase with the JDBC 4 driver and Spring JDBC, and I found this link: http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc39001.0707/html/prjdbc0707/prjdbc070714.htm
Could I just use any of the following:
SESSION_TIMEOUT
DEFAULT_QUERY_TIMEOUT
INTERNAL_QUERY_TIMEOUT
One idea is to batch the transactions, but I have no time to develop that.
What options are there to avoid getting that error?
Check whether your processes are blocking each other when they execute (or ask your DBA if you're not sure how to check). Depending on the connection properties (specifically, autocommit being set to off), you may not actually be committing each transaction fully before the next one is attempted, and transactions may block each other if you're using a connection pool with multiple threads. Talk to your DBA and check the table's locking scheme: if it's set to allpages locking, for example, you will hold locks at the page level rather than the row level. You can also check this yourself via sp_help. More information on the various locking schemes can be found at http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20021_1251/html/locking/X25549.htm (an old version, but still valid on current versions).
You can check for locks via sp_who, sp_lock, or against the system tables directly (select spid, blocked from master..sysprocesses where blocked != 0 is a very simple query to get the blocked process and the process blocking it; you can add more columns as required).
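If you would rather run that check from code than from isql, a small sketch along these lines executes the same query; connection handling is left to the caller:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class BlockingCheck {
    static void printBlockedProcesses(Connection conn) throws SQLException {
        String sql = "select spid, blocked from master..sysprocesses where blocked != 0";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // Each row pairs a blocked spid with the spid blocking it.
                System.out.println("spid " + rs.getInt("spid")
                        + " is blocked by spid " + rs.getInt("blocked"));
            }
        }
    }
}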
You should also ask your DBA to check that the transactions are optimal: a table scan on an UPDATE, for example, could lock out the whole table to other transactions and would lead to the timeout issues you're seeing here.
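One hedged way to address the "not committing each transaction fully" point, assuming Spring 5.2+ and a configured PlatformTransactionManager (neither is confirmed by the question), is to scope each unit of work in a TransactionTemplate so it commits before the next one starts; the table and method are illustrative:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

public class CommitPerUnitOfWork {
    private final JdbcTemplate jdbc;
    private final TransactionTemplate tx;

    public CommitPerUnitOfWork(JdbcTemplate jdbc, PlatformTransactionManager txManager) {
        this.jdbc = jdbc;
        this.tx = new TransactionTemplate(txManager);
    }

    public void updateStatus(long id, String status) {
        // The transaction commits (or rolls back) before this method returns,
        // so locks are not held while the next unit of work runs.
        tx.executeWithoutResult(ignored ->
                jdbc.update("update app.orders set status = ? where id = ?", status, id));
    }
}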
I am trying to use HikariCP in a legacy system. I configured autocommit to false, which is what we want, and realized that my logs are filled with
[c.z.h.p.ProxyConnection][ProxyConnection.java:232] ora - Executed rollback on connection net.sf.log4jdbc.ConnectionSpy@3f2bba67 due to dirty commit state on close().
This happens when a connection acquired from the pool is closed after issuing a finder query. No inserts/updates/deletes happen within the life of the connection. Is this how it should be for SELECT queries? Should I be doing a COMMIT after each SELECT?
Yes, you should be committing. Even SELECT queries initiate transactions and acquire locks, particularly at certain isolation levels and, depending on the database, even with TRANSACTION_READ_COMMITTED.
HikariCP treats a non-explicit commit when autocommit is false as an application error. Some other pools support configuring "commit-on-close", but HikariCP considers that risky, and a hack to support applications that were never properly written.
The JDBC specification is explicitly silent on whether a Connection without auto-commit should automatically commit or roll back on close. That is an implementation detail left up to the driver developers.
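In practice that means even a read-only unit of work should end its transaction explicitly. A minimal sketch with a pooled DataSource; the query and table are placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class FinderQuery {
    static String findName(DataSource pool, long id) throws SQLException {
        try (Connection conn = pool.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name FROM customers WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                String name = rs.next() ? rs.getString(1) : null;
                // End the read-only transaction explicitly so the pool does
                // not see a dirty commit state when the connection is closed.
                conn.commit();
                return name;
            }
        }
    }
}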
I have written a script for Oracle backup and restore using RMAN.
Note: I took a backup of the database plus the archive logs.
I then ran some SQL statements in Oracle but did not commit the transaction, so it may be sitting somewhere in the redo logs; I am not sure about that.
In that situation I took the backup (database + archive logs) and performed a restore.
The uncommitted data was not present.
I am confused by this scenario. Is this behaviour correct, or is it losing my data, or did I miss something?
This is perfectly fine. Your transaction is in fact in the redo. But since you didn't commit it, the recovery process rolled it back after reapplying it, because it couldn't find a commit at the end of the redo stream. This is by design. The opposite would be a problem: if you had committed a statement, then no matter what happened to the server (power loss, crash) you should be able to see it after restoring the server and applying all of the redo/archive logs.
The reason is that once you commit, all of the work needed to re-execute your transaction must be stored on disk (in the redo log file). There are other types of commit (COMMIT WRITE NOWAIT, for example) that bypass this behaviour and should be avoided.
Hope this helps.
My app recovers automatically from failures. I test it as follows:
Start app
In the middle of processing, kill the application server host (shutdown -r -f)
On host reboot, application server restarts (as a windows service)
Application restarts
Application tries to process, but is blocked by an incomplete two-phase-commit transaction in the Oracle DB from the previous session.
Somewhere between 10 and 30 minutes later the DB resolves the prior txn and processing continues OK.
I need it to continue processing faster than this. My DBA advises that I should prefix my statement with
ALTER SESSION ADVISE COMMIT;
But he can't give me guarantees or details about the potential for data loss in doing this.
Luckily, the statement in question simply updates a datetime column to SYSDATE every second or so, so if there were any data corruption it would last less than a second before being overwritten.
But on to my question: what exactly does the statement above do? How does Oracle resolve data synchronisation issues when it is used?
Can you clarify the role of the 'local' and 'remote' databases in your scenario?
Generally, a multi-DB transaction does the following:
Starts the transaction
Makes a change on one database
Makes a change on the other database
Gets the other database to 'promise to commit'
Commits locally
Gets the remote db to commit
In-doubt transactions happen if step 4 is completed and then something fails. The general practice is to get the remote database back up and confirm whether it committed. If so, step 5 goes ahead. If the remote component of the transaction can't be committed, the local component is rolled back.
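For readers who want to see that sequence in code, here is a minimal sketch of the steps using JDBC's XA interfaces; the Xids, table names, and the transfer itself are illustrative, and a real transaction manager would also log its commit decision and handle rollback:

import java.sql.Statement;
import javax.sql.XAConnection;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

public class TwoPhaseSketch {
    // Minimal Xid; real transaction managers generate and log these.
    static final class SimpleXid implements Xid {
        private final byte[] gtrid, bqual;
        SimpleXid(byte[] gtrid, byte[] bqual) { this.gtrid = gtrid; this.bqual = bqual; }
        public int getFormatId() { return 0x4242; }
        public byte[] getGlobalTransactionId() { return gtrid; }
        public byte[] getBranchQualifier() { return bqual; }
    }

    static void transfer(XAConnection local, XAConnection remote) throws Exception {
        Xid xidLocal = new SimpleXid(new byte[]{1}, new byte[]{1});
        Xid xidRemote = new SimpleXid(new byte[]{1}, new byte[]{2});
        XAResource resLocal = local.getXAResource();
        XAResource resRemote = remote.getXAResource();

        // Steps 1-3: start both branches and make a change in each database.
        resLocal.start(xidLocal, XAResource.TMNOFLAGS);
        resRemote.start(xidRemote, XAResource.TMNOFLAGS);
        try (Statement sl = local.getConnection().createStatement();
             Statement sr = remote.getConnection().createStatement()) {
            sl.executeUpdate("UPDATE accounts SET balance = balance - 10 WHERE id = 1");
            sr.executeUpdate("UPDATE accounts SET balance = balance + 10 WHERE id = 1");
        }
        resLocal.end(xidLocal, XAResource.TMSUCCESS);
        resRemote.end(xidRemote, XAResource.TMSUCCESS);

        // Step 4: the 'promise to commit' (rollback handling elided).
        resLocal.prepare(xidLocal);
        resRemote.prepare(xidRemote);

        // Steps 5-6: commit both branches. A crash between prepare and
        // commit is exactly the in-doubt state described above.
        resLocal.commit(xidLocal, false);
        resRemote.commit(xidRemote, false);
    }
}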
Your description seems to refer to an app server failure, which is a different kettle of fish. In your case, I think the scenario is as follows:
App server takes a connection and starts a transaction
App server dies without committing
App server restarts and make a new database connection
App server starts a new transaction on the new connection
New transaction gets 'stuck' waiting for a lock held by the old connection/transaction
After 20 minutes, the dead connection is terminated and its transaction rolled back
New transaction then continues
In which case the solution is to kill off the old connection more quickly, either with a shorter timeout (e.g. SQLNET.EXPIRE_TIME in the server's sqlnet.ora) or with a manual ALTER SYSTEM KILL SESSION.
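If you take the manual route, a hedged sketch of killing the stale blocker from JDBC might look like this; it requires the ALTER SYSTEM privilege, and the lookup of which session to kill is deliberately simplified:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class KillStaleBlocker {
    static void killBlockerOf(Connection admin, int blockedSid) throws SQLException {
        String find = "SELECT sid, serial# FROM v$session WHERE sid = "
                + "(SELECT blocking_session FROM v$session WHERE sid = " + blockedSid + ")";
        try (Statement stmt = admin.createStatement();
             ResultSet rs = stmt.executeQuery(find)) {
            if (rs.next()) {
                // KILL SESSION takes no bind variables; sid and serial# come
                // straight from v$session.
                String kill = "ALTER SYSTEM KILL SESSION '"
                        + rs.getInt(1) + "," + rs.getInt(2) + "' IMMEDIATE";
                try (Statement killer = admin.createStatement()) {
                    killer.execute(kill);
                }
            }
        }
    }
}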