While this particular question has been asked multiple times already, I am still unsure about it. My setup is something like this: I am using JDBC with auto-commit set to false. Say I have 3 insert statements that I want to execute as a single transaction, followed by conn.commit().
sample code:
Connection conn = null;
try {
    conn = getConnection();
    conn.setAutoCommit(false);
    insertStatement(conn); // #1
    insertStatement(conn); // #2
    insertStatement(conn); // #3, could throw an error
    conn.commit();
} catch (SQLException e) {
    conn.rollback(); // why is it needed?
}
Say I have two scenarios:
Either there won't be any error, conn.commit() will be called, and everything will be committed.
Or the first two statements work fine but there is an error in the third one. In that case conn.commit() is never called, so the database is still in a consistent state. Why then do I need to call conn.rollback()?
I also noticed that some people mentioned that rollback has an impact in the case of connection pooling. Could anyone explain how that comes into play?
A rollback() is still necessary. Not committing or rolling back a transaction might keep resources in use on the database (transaction handles, logs, record versions, etc.). Explicitly committing or rolling back makes sure those resources are released.
Not doing an explicit rollback can also have bad effects if you continue to use the connection and commit later: the statements that did succeed in the failed transaction (#1 and #2 in your example) will then be persisted along with the new work.
The Connection API doc, however, does say "If auto-commit mode has been disabled, the method commit must be called explicitly in order to commit changes; otherwise, database changes will not be saved.", which should be interpreted as: Connection.close() causes a rollback. However, I believe there have been JDBC driver implementations that committed on connection close.
The impact on connection pooling should not exist for correct implementations. Closing the logical connection obtained from the connection pool should have the same effect as closing a physical connection: an open transaction should be rolled back. However, sometimes connection pools are not correctly implemented, have bugs, or take shortcuts for performance reasons, all of which could mean you get handed a logical connection that already has an open transaction from its previous use.
Therefore: be explicit in calling rollback.
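As a minimal sketch of that advice (class, method, and table names are invented for illustration, and a javax.sql.DataSource is assumed), the important parts are the rollback in the catch block and restoring auto-commit before the connection goes back to the pool:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TransferDao {

    public void insertAll(DataSource dataSource) throws SQLException {
        Connection conn = dataSource.getConnection();
        conn.setAutoCommit(false);
        try {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
                ps.setLong(1, 1L);
                ps.setBigDecimal(2, new BigDecimal("100.00"));
                ps.executeUpdate();       // #1 (repeat for #2 and #3)
            }
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();              // releases locks/undo on the server and discards #1 and #2
            throw e;
        } finally {
            conn.setAutoCommit(true);     // restore the default before the pool hands the connection out again
            conn.close();
        }
    }
}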
Related
I have multiple stacked transactions created by calling stacked methods with:
@Transactional(propagation = Propagation.REQUIRES_NEW)
so the result is transaction waiting for new transaction waiting for new transaction...
Do each of these transactions use a separate db connection from the connection pool, possibly starving the pool?
P.S.: I know that I shouldn't stack new transactions due to errors not rolling back all transactions, but I'm curious about the behaviour.
Yes, when you are using REQUIRES_NEW you will get a new transaction for every method call. A new transaction means a new database connection from the pool is being used.
And yes, that means potentially starving it.
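As a rough sketch of what that stacking looks like (the service names are made up; each level is a separate Spring bean so the proxy-based @Transactional actually applies, and each suspended REQUIRES_NEW transaction keeps its connection checked out while the inner call runs):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
class LevelThreeService {
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void levelThree() {
        // connection #3 does its work here
    }
}

@Service
class LevelTwoService {
    private final LevelThreeService levelThree;
    LevelTwoService(LevelThreeService levelThree) { this.levelThree = levelThree; }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void levelTwo() {
        // this transaction is suspended while levelThree() runs,
        // but connection #2 stays checked out of the pool
        levelThree.levelThree();
    }
}

@Service
class LevelOneService {
    private final LevelTwoService levelTwo;
    LevelOneService(LevelTwoService levelTwo) { this.levelTwo = levelTwo; }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void levelOne() {
        // while this call is in flight, connections #1, #2 and #3 are all held at once;
        // with a pool maximum below 3, this call blocks waiting for a free connection
        levelTwo.levelTwo();
    }
}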
You might enjoy this database transactions book for more detailed information including lots of code examples: https://www.marcobehler.com/books/1-java-database-connections-transactions
If we add afterUpdate event code on a domain object (e.g. our Session object) in Grails:
Is it called after the update has been committed, or after it is flushed, or other?
If the update failed (e.g. constraint, or optimistic lock fail), will the after event still be called?
Will afterUpdate be in the same transaction as the update?
Will the commit of the service method which did the update wait till the afterUpdate method is finished, and, if so, is there any way round this (except creating a new thread)?
We have a number of instances of our Grails application running on multiple Tomcats. Each has a session-expiry Quartz job to expire our sessions (domain objects).
The job basically calls getAllSession with lastUpdated > xxx, then loops through the results calling session.close(Session.Expired).
Session.close just sets the session.status to Expired.
In theory, the same session could be closed twice at the same time by the job running on two servers, but this doesn't matter (yet).
Now we want to auto-cashout customers with expired (or killed) sessions. The cashout process entails making calls to external payment systems, which can take up to 1 minute and may fail (but should not stop the session from being closed, or 'lock' other sessions).
If we used afterUpdate on the Session domain object, we could check the session.status and fire off the cashout, either outside of the transaction or in another thread (e.g. using Executors). But this is very risky, as we don't know the exact behaviour. E.g. if the update failed, would it still try to execute the afterUpdate call? We assume so, as we are guessing the commit won't happen till later.
The other unknown is how calling save and commit works with optimistic locking. E.g. if you call save(flush=true) and you don't get an error back, are you guaranteed that the commit will work (barring the db falling over), or are there scenarios where this can fail?
Is it called after the update has been committed, or after it is flushed, or other?
After the update has been made, but before the transaction has been committed. So if an exception occurs inside afterUpdate, the transaction will be rolled back.
If the update failed (e.g. constraint, or optimistic lock fail), will the after event still be called?
No
Will afterUpdate be in the same transaction as the update?
Yes
Will the commit of the service method which did the update wait till the afterUpdate method is finished, and, if so, is there any way round this (except creating a new thread)?
No easy way around
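If the real goal is to run the cashout only once the update has definitely committed, one possible workaround (just a sketch, using the Spring API that sits underneath Grails transactions, assuming Spring 5+ where TransactionSynchronization has default methods; cashoutService and sessionId are hypothetical names) is to register an after-commit callback from inside the transactional code:

import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

// inside the transactional code path that sets session.status = Expired
if (TransactionSynchronizationManager.isSynchronizationActive()) {
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            // runs only if the update really committed, and outside the transaction,
            // so a slow or failing payment call can no longer roll the session update back
            cashoutService.cashout(sessionId); // hypothetical service and id captured from the enclosing scope
        }
    });
}

Note that afterCommit still runs on the same thread before the calling commit returns, so a one-minute payment call would still hold up the caller; handing the work off to an Executor inside the callback (as you were already considering) avoids that as well.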
Is there any scenario in which Oracle won't throw an ORA-00060 (deadlock detected while waiting for resource) error on a deadlock in the db?
I have an issue with the app freezing on multithreaded record inserts and I wonder what might have caused that.
I'm not sure what other relevant information I can provide so please ask in case of any doubts.
At times I have felt that Oracle did not throw a deadlock error every single time one actually occurred. Having said that, if you are experiencing locks with multithreaded inserts, it is more likely that the sessions are temporarily waiting on each other than truly deadlocking.
To find out for sure, you can query v$session, paying particular attention to STATUS, BLOCKING_SESSION, WAIT_CLASS, EVENT, P1TEXT, P2TEXT and P3TEXT. That should paint the picture in terms of which sessions are holding each other up and why. A true deadlock would have session A blocking session B and session B blocking session A, which is relatively rare.
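For example, something along these lines (a sketch only; the JDBC URL and credentials are placeholders, and the account you connect as needs access to v$session) lists the sessions that are currently blocked and what they are waiting on:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BlockingSessions {
    public static void main(String[] args) throws Exception {
        String sql =
            "SELECT sid, status, blocking_session, wait_class, event, p1text, p2text, p3text " +
            "  FROM v$session " +
            " WHERE blocking_session IS NOT NULL";
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "monitoring_user", "secret");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // one row per session that is currently waiting on another session
                System.out.printf("SID %d (%s) blocked by SID %d: %s / %s%n",
                    rs.getLong("sid"), rs.getString("status"),
                    rs.getLong("blocking_session"), rs.getString("wait_class"),
                    rs.getString("event"));
            }
        }
    }
}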
There is also a chance that the application is hanging due to some multithreading mishap, not a database one.
Does the driver commit on close() by default even if conn.setAutoCommit(false) has been set?
I have checked it for an insert query, and yes it does. Let me know if I'm wrong.
When a Connection is closed, it needs to either roll back or commit the current transaction. IIRC, the JDBC specification allows an implementation to choose either, as long as it is consistent in its behavior (always commit or always roll back on close). So yes, the behavior is allowed and in that sense correct.
Whether it is the best choice is debatable: you can argue that committing on close makes sure no information is lost; on the other hand, you didn't explicitly commit, so maybe you didn't want the info to be persisted.
When working with autocommit=false, just roll back explicitly before closing the connection. This way, all updates are reverted unless explicitly committed.
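In other words, something like this sketch (doSomeUpdates is a placeholder): the outcome is always made explicit before close(), so nothing depends on what the driver happens to do on close:

Connection conn = dataSource.getConnection();
conn.setAutoCommit(false);
boolean committed = false;
try {
    doSomeUpdates(conn);   // placeholder for your inserts/updates
    conn.commit();
    committed = true;
} finally {
    if (!committed) {
        conn.rollback();   // explicit outcome instead of relying on the driver's close() behavior
    }
    conn.close();
}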
I have a WCF service that uses ODP.NET to read data from an Oracle database. The service also writes to the database, but indirectly, as all updates and inserts are achieved through an older layer of business logic that I access via COM+, which I wrap in a TransactionScope. The older layer connects to Oracle via ODBC, not ODP.NET.
The problem I have is that because Oracle uses a two-phase commit, and because the older business layer is using ODBC and not ODP.NET, TransactionScope.Commit() sometimes returns before the data is actually available for reads from the service layer.
I see a similar post about a Java user having trouble like this as well on Stack Overflow.
A representative from Oracle posted that there isn't much I can do about this problem:
This may be due to the way the OLETx ITransaction::Commit() method behaves. After phase 1 of the 2PC (i.e. the prepare phase), if all is successful, commit can return even if the resource managers haven't actually committed. After all, a successful "prepare" is a guarantee that the resource managers cannot arbitrarily abort after this point. Thus even though a resource manager couldn't commit because it didn't receive a "commit" notification from the MSDTC (due to, say, a communication failure), the component's commit request returns successfully. If you select rows from the table(s) immediately you may sometimes see the actual commit occur in the database after you have already executed your select. Your select will therefore not see the new rows, due to consistent read semantics. There is nothing we can do about this in Oracle, as the "commit success after successful phase 1" optimization is part of the MSDTC's implementation.
So, my question is this:
How should I go about dealing with the possible delay (the "async" of the title) in figuring out when the second phase of the 2PC actually occurs, so I can be sure that data I inserted (indirectly) is actually available to be selected after the Commit() call returns?
How do big systems deal with the fact that the data might not be ready for reading immediately?
I assume that the whole transaction has prepared and a commit outcome has been decided by the transaction manager, so eventually (barring heuristic damage) the resource managers will receive their commit message and complete. However, there are no guarantees as to how long that might take: it could be days, no timeouts apply, and having voted "commit" in the prepare phase the resource manager must wait to hear the collective outcome.
Under these conditions, the simplest approach is to take an "understood, we're thinking about it" approach: the request has been understood, but you don't actually know the outcome yet, and that's what you tell the user. Yes, in all sane circumstances the request will complete, but under some conditions operators could actually choose to intervene in the transaction manually (and maybe cause heuristic damage in doing so).
To go one step further, you could start a new transaction and perform some queries to see if the data is there. If you are populating a result screen you will naturally be doing such a query anyway. The question is what to do if the expected results are not there. So again, tell the user "your recent request is being processed, hit refresh to see if it's complete". Or retry automatically (I don't much like auto-retry; I prefer to educate the user that it's effectively an async operation).
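If you do go the automatic route, the reader side becomes a bounded poll in a fresh transaction, roughly like this sketch (written in Java only for brevity, since the idea is stack-agnostic; recordIsVisible and notifyPending are placeholder methods):

// Poll with a bounded back-off instead of blocking forever, then fall back to "refresh later".
static void awaitVisibility(long recordId) throws InterruptedException {
    for (int attempt = 0; attempt < 5; attempt++) {
        if (recordIsVisible(recordId)) {      // placeholder: a read-only query in a new transaction
            return;                           // the commit has become visible, proceed normally
        }
        Thread.sleep(500L * (attempt + 1));   // simple linear back-off between polls
    }
    notifyPending(recordId);                  // placeholder: tell the user to hit refresh later
}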