Rolling back multiple transactions with JDBC

Is it possible to roll back multiple already-committed transactions with JDBC?
According to this link, savepoints are only active for the current transaction: http://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html
Thanks.

Rolling back already-committed transactions, whether individual or multiple (unlike rolling back to savepoints!), is not possible on any database as far as I know, and definitely not on Oracle. Yes, savepoints are relevant only for the current transaction.
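To illustrate the scope of a savepoint, here is a minimal JDBC sketch (the table name t and the connection handling are illustrative); a savepoint can only be rolled back to before the enclosing transaction commits:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Savepoint;
    import java.sql.Statement;

    // Minimal sketch: a savepoint is only valid inside the current transaction.
    static void savepointDemo(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("INSERT INTO t VALUES (1)");
            Savepoint sp = conn.setSavepoint();
            stmt.executeUpdate("INSERT INTO t VALUES (2)");
            conn.rollback(sp); // undoes only the second insert
            conn.commit();     // after this, neither insert can be undone
        }
    }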
I'm not sure what your problem is, but if you want to look at old values of a recently committed table you could use a Flashback Query (SELECT ... AS OF), or similarly flash back the whole table or even the database.
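For example, a Flashback Query issued through JDBC might look like the sketch below; the accounts table, the balance column and the five-minute window are purely illustrative, and the feature has to be available on the database:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Sketch: read a row as it looked 5 minutes ago via Oracle Flashback Query.
    static void readOldValue(Connection conn, long id) throws SQLException {
        String sql = "SELECT balance FROM accounts " +
                     "AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE) " +
                     "WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println(rs.getLong(1)); // the pre-change value
                }
            }
        }
    }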
If you think about it for a while, there are lots of constraints here; rolling back individual transactions is sometimes logically impossible without violating a whole lot of data integrity rules...

Related

Can we persist two different table entities in DynamoDB under one single transaction

I have two tables in Amazon DynamoDB where I have to persist data in a single transaction using Spring Boot. If the persistence fails in the second table, it should roll back for the first table also.
I have tried looking into the AWSLabs DynamoDB transactions library, but it only helps for a single table.
Try using the built-in DynamoDB transactions capability. From the limited information you give, it should do what you are looking for across multiple tables within a region. Just keep in mind that there is no rollback per se: either all items in a transaction are written or none of them are. The internal transaction coordinator handles that for you.
Now that this feature is out, you most likely should not be looking at the AWSLabs tool.
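As a minimal sketch with the AWS SDK for Java v2 (the table and attribute names Orders, OrderAudit, orderId and auditId are hypothetical), both puts are applied atomically or not at all:

    import java.util.Map;
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
    import software.amazon.awssdk.services.dynamodb.model.*;

    public class TwoTablePut {
        public static void main(String[] args) {
            DynamoDbClient ddb = DynamoDbClient.create();

            TransactWriteItemsRequest request = TransactWriteItemsRequest.builder()
                .transactItems(
                    TransactWriteItem.builder().put(Put.builder()
                        .tableName("Orders")
                        .item(Map.of("orderId", AttributeValue.builder().s("o-1").build()))
                        .build()).build(),
                    TransactWriteItem.builder().put(Put.builder()
                        .tableName("OrderAudit")
                        .item(Map.of("auditId", AttributeValue.builder().s("a-1").build()))
                        .build()).build())
                .build();

            try {
                ddb.transactWriteItems(request); // all-or-nothing across both tables
            } catch (TransactionCanceledException e) {
                // Nothing was written; the reasons say which item caused the cancellation.
                System.err.println(e.cancellationReasons());
            }
        }
    }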

Commits in the absence of locks in CockroachDB

I'm trying to understand how ACID in CockroachDB works without locks, from an application programmer's point of view. Would like to use it for an accounting / ERP application.
When two users update the same database field (e.g. a general ledger account total field) at the same time what does CockroachDB do? Assuming each is updating many other non-overlapping fields at the same time as part of the respective transactions.
Will the aborted application's commit process be informed about this immediately at the time of the commit?
Do we need to handle any additional possibilities beyond those in, for example, a locking ACID database such as PostgreSQL when we write the database access code in our application?
Or is writing code for accessing CockroachDB, for all practical purposes, the same as for accessing a standard RDBMS with respect to commits and in general?
Of course, ignoring performance issues / joins, etc.
I'm trying to understand how ACID in CockroachDB works without locks, from an application programmer's point of view. Would like to use it for an accounting / ERP application.
CockroachDB does have locks, but uses different terminology. Some of the existing documentation that talks about optimistic concurrency control is currently being updated.
When two users update the same database field (e.g. a general ledger account total field) at the same time what does CockroachDB do? Assuming each is updating many other non-overlapping fields at the same time as part of the respective transactions.
One of the transactions will block waiting for the other to commit. If a deadlock between the transactions is detected, one of the two transactions involved in the deadlock will be aborted.
Will the aborted application's commit process be informed about this immediately at the time of the commit?
Yes.
Do we need to handle any additional possibilities beyond those in, for example, a locking ACID database such as PostgreSQL when we write the database access code in our application?
Or is writing code for accessing CockroachDB, for all practical purposes, the same as for accessing a standard RDBMS with respect to commits and in general?
At a high level there is nothing additional for you to do. CockroachDB defaults to serializable isolation, which can result in more transaction restarts than weaker isolation levels, but comes with the advantage that the application programmer doesn't have to worry about anomalies.
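In practice, the one habit to build is retrying transactions that abort with a serialization failure (SQLSTATE 40001). A minimal JDBC sketch of such a retry loop, with an illustrative accounts table:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Re-run the whole transaction whenever it aborts with SQLSTATE 40001,
    // which serializable databases use to signal a serialization conflict.
    static void transfer(Connection conn, long from, long to, long amount) throws SQLException {
        conn.setAutoCommit(false);
        while (true) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                ps.setLong(1, -amount); ps.setLong(2, from); ps.executeUpdate();
                ps.setLong(1, amount);  ps.setLong(2, to);   ps.executeUpdate();
                conn.commit();
                return;
            } catch (SQLException e) {
                if ("40001".equals(e.getSQLState())) {
                    conn.rollback(); // conflict: start the transaction over
                } else {
                    throw e;
                }
            }
        }
    }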

Is it possible to query the un-modified data within an Oracle (9.2) Transaction?

I'm looking at doing a data-fix and need to be able to prove that the data I have intended to change is the only data changed. (For example - that a trigger hasn't modified additional columns that I wasn't expecting)
I've been looking at Oracle's Flashback Query process, which would be great, except that this is not enabled on the database in question.
Since this check would be carried out prior to committing the transaction, Oracle must have the "before" information squirreled away somewhere, and I wondered if there is any way of accessing this undo information?
Otherwise, I would potentially have to make a temporary copy of each table and do a compare between the live table and the backup, which may also result in inconsistencies between the backup query time and the update transaction time.
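For reference, a sketch of that copy-and-compare fallback (the emp table and the update are illustrative). One caveat: CREATE TABLE ... AS SELECT is DDL and commits implicitly, so the snapshot has to be taken before the data-fix transaction is opened:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    static void fixAndCompare(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE emp_before AS SELECT * FROM emp"); // snapshot (implicit commit)

            conn.setAutoCommit(false);
            st.executeUpdate("UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10"); // the data-fix

            // The session sees its own uncommitted changes, so a symmetric MINUS
            // lists every differing row before we decide to commit or roll back.
            try (ResultSet rs = st.executeQuery(
                    "(SELECT * FROM emp MINUS SELECT * FROM emp_before) " +
                    "UNION ALL " +
                    "(SELECT * FROM emp_before MINUS SELECT * FROM emp)")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // a changed row
                }
            }
            conn.rollback(); // or conn.commit() once the diff looks right
        }
    }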
While I'm expecting the answer "no", I'm hoping someone can point me in a better direction than that which I appear to be headed at present.
Thanks!

Dirty Reading in hibernate

Dirty Read: The definition states that
dirty reading occurs when a transaction reads data from a row that has been modified by another transaction but not yet committed.
Assuming the definition is correct, I am unable to fathom any such situation.
Due to the principle of isolation, transaction A cannot see the uncommitted data of a row that has been modified by transaction B. If transaction B has simply not committed, how can transaction A see it in the first place? It seems only possible when both operations are performed under the same transaction.
Can someone please explain what am I missing here?
"Dirty", or uncommitted reads (UR) are a way to allow non-blocking reads. Reading uncommitted data is not possible in an Oracle database due to the multi-version concurrency control employed by Oracle; instead of trying to read other transactions' data each transaction gets its own snapshot of data as they existed (committed) at the start of the transaction. As a result all reads are essentially non-blocking.
In databases that use lock-based concurrency control, e.g DB2, uncommitted reads are possible. A transaction using the UR isolation level ignores locks placed by other transactions, and thus it is able to access rows that have been modified but not yet committed.
Hibernate, being an abstraction layer on top of a database, offers the UR isolation level support for databases that have the capability.
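A sketch of requesting that level through plain JDBC (the accounts table is illustrative); Hibernate exposes the same setting via its hibernate.connection.isolation property. Whether dirty rows actually become visible depends on the database: DB2 honors UR, while Oracle's driver rejects the level because only READ_COMMITTED and SERIALIZABLE are supported:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    static void dirtyRead(Connection conn) throws SQLException {
        // Ask for uncommitted reads; drivers that don't support it will throw.
        conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT balance FROM accounts WHERE id = 1")) {
            if (rs.next()) {
                System.out.println(rs.getLong(1)); // possibly another transaction's uncommitted value
            }
        }
    }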

AUTONOMOUS_TRANSACTION: pros and cons

Can autonomous transactions be dangerous? If yes, in which situations? When are autonomous transactions necessary?
Yes, autonomous transactions can be dangerous.
Consider the situation where you have your main transaction, and it has inserted/updated/deleted rows. If you then, within that, set up an autonomous transaction, one of the following applies:
(1) It will not query any data at all. This is the 'safe' situation. It can be useful to log information independently of the primary transaction, so that it can be committed without impacting the primary transaction (which can be useful for logging error information when you expect the primary transaction to be rolled back).
(2) It will only query data that has not been updated by the primary transaction. This is safe, but superfluous. There is no point to the autonomous transaction.
(3) It will query data that has been updated by the primary transaction. This smacks of a poorly-thought-through design, since you've overwritten something and then need to go back to see what it was before you overwrote it. Sometimes people think that an autonomous transaction will still see the uncommitted changes of the primary transaction, and it won't. It reads the currently committed state of the database, plus any changes made within the autonomous transaction itself. Some people (often trying autonomous transactions in response to mutating-trigger errors) don't care what state the data is in when they try to read it, and these people simply shouldn't be allowed access to a database.
(4) It will try to update/delete data that hasn't been updated by the primary transaction. Again, this smacks of poor design. These changes are going to get committed (or rolled back) whether the primary transaction succeeds or fails. Worse, you risk issue (5), since it is hard to determine, within an autonomous transaction, whether the data has been updated by the primary transaction.
(5) It will try to update/delete data that has already been updated by the primary transaction, in which case it will deadlock and end up in an ugly mess.
Can autonomous transactions be dangerous?
Yes.
If yes, in which situations?
When they're misused. For example, when used to make changes to data which should have been rolled back if the rest of the parent transaction is rolled back. Misusing them can cause data corruption because some portions of a change are committed, while others are not.
When are autonomous transactions necessary?
They are necessary when the effects of one transaction must survive, regardless of whether the parent transaction is committed or rolled back. A good example is a procedure which logs the progress and activity of a process to a database table.
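A sketch of that logging pattern, with the procedure created through JDBC (the procedure and table names are illustrative). The COMMIT inside ends only the autonomous transaction, so the log row survives even if the caller later rolls back:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    static void createLogger(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute(
                "CREATE OR REPLACE PROCEDURE log_msg(p_msg VARCHAR2) AS " +
                "  PRAGMA AUTONOMOUS_TRANSACTION; " +
                "BEGIN " +
                "  INSERT INTO app_log (logged_at, msg) VALUES (SYSTIMESTAMP, p_msg); " +
                "  COMMIT; " + // commits only the autonomous transaction
                "END;");
        }
    }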
When are autonomous transactions necessary?
Check my question: How can LOCK survive COMMIT or how can changes to LOCKed table be propagated to another session without COMMIT and losing LOCK
We ingest business configurations sequentially and must forbid parallel processing.
I take a lock on the table with the configurations and update the other tables accordingly. I commit each batch of updates to the other tables, as we can't afford to keep one transaction over all records; the probability of collision would be near 0.99.
Each failure caused by concurrent access is persisted to a log for a later update attempt.
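A sketch of that pattern in JDBC (the configurations table is illustrative). Note that each COMMIT releases the table lock, which is exactly the difficulty the linked question is about:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    static void ingestBatch(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement st = conn.createStatement()) {
            st.execute("LOCK TABLE configurations IN EXCLUSIVE MODE"); // serialize ingestion
            // ... apply one batch of updates to the dependent tables ...
            conn.commit(); // also releases the lock; the next batch must re-acquire it
        }
    }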