Oracle: release lock on a single row (while retaining locks on other rows)?

With Oracle, is it possible to (with a single database connection):
1. lock a single row (row1)
2. then lock another row (row2)
3. release the lock on row1 (retaining the lock on row2)
4. obtain a lock on another row (row3)
5. release the lock on row2 (retaining the lock on row3)
6. release the lock on row3
I realize Oracle supports many different kinds of locks (I've found this very helpful: http://broadh2o.net/docs/database/oracle/oracleLocks.html ), so perhaps the answer depends on which kind of lock is chosen. I'm interested in exclusive locks - i.e. locks which prevent other connections from updating the row.
I would like to know if I can perform operations 1-6 using a single database connection. I certainly could use a separate database connection for each row. It seems that locks are released using COMMIT/ROLLBACK, so that would suggest releasing the lock on a single row isn't possible.

You cannot selectively release locks on rows. Once you lock row1, that lock will only be released at the end of your transaction. But the end of your transaction will also release any other locks held by your transaction (i.e. the lock on row2).
Depending on the business problem you are trying to solve, you may not really want to lock individual rows at all. You may instead want to use the dbms_lock package to acquire and release user-defined locks. If you have user-defined locks lock1, lock2, and lock3, then you could acquire and release the three locks just as you've outlined within a single transaction. Setting up user-defined locking, however, can be quite dangerous: it requires a lot more work from developers, who have to protect the right sections of their code with the appropriate locks, and it is possible to request a user-defined lock that will not be released when a transaction commits or rolls back, which makes it possible to really shoot yourself in the foot if you don't handle your exceptions correctly.
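To make that concrete, here is a minimal JDBC sketch of the DBMS_LOCK approach, using the integer-id overload of DBMS_LOCK.REQUEST. The lock ids and connection details are invented, the session typically needs EXECUTE privilege on DBMS_LOCK, and real code should check the returned status codes:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

// Sketch: user-defined locks via DBMS_LOCK on a single connection. Lock ids and
// connection details are made up; real code should check the status codes
// returned by DBMS_LOCK.REQUEST / DBMS_LOCK.RELEASE (0 means success).
public class UserLockDemo {

    static int acquire(Connection conn, int lockId) throws Exception {
        String block =
            "BEGIN\n" +
            "  ? := DBMS_LOCK.REQUEST(id                => ?,\n" +
            "                         lockmode          => DBMS_LOCK.X_MODE,\n" +
            "                         timeout           => 10,\n" +
            "                         release_on_commit => FALSE);\n" +
            "END;";
        try (CallableStatement cs = conn.prepareCall(block)) {
            cs.registerOutParameter(1, Types.INTEGER);
            cs.setInt(2, lockId);
            cs.execute();
            return cs.getInt(1);   // 0 = lock acquired, 1 = timeout, ...
        }
    }

    static int release(Connection conn, int lockId) throws Exception {
        try (CallableStatement cs =
                 conn.prepareCall("BEGIN ? := DBMS_LOCK.RELEASE(id => ?); END;")) {
            cs.registerOutParameter(1, Types.INTEGER);
            cs.setInt(2, lockId);
            cs.execute();
            return cs.getInt(1);   // 0 = lock released
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app", "secret")) {
            acquire(conn, 1);   // "lock1"
            acquire(conn, 2);   // "lock2"
            release(conn, 1);   // lock1 released, lock2 still held
            acquire(conn, 3);   // "lock3"
            release(conn, 2);   // lock2 released, lock3 still held
            release(conn, 3);   // all locks released
        }
    }
}
```

Because release_on_commit is FALSE, these locks live independently of COMMIT/ROLLBACK - exactly the flexibility row locks cannot give you, and also exactly why forgetting to release them in an exception handler is so easy.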

Related

Oracle v19: can ongoing transactions block concurrent deletes on involved tables for extended periods?

We have a severe issue with threads hanging in operations to an Oracle DB (v19, connected to via JDBC connections).
The situation frequently happens while our application runs a big transaction within which it does a lot of major (i.e. quite complicated, lots of joins, etc.) queries and then updates a bunch of rows. These transactions can take several minutes.
As far as we have been able to analyze, the transaction processing blocks other concurrent tasks when they try to delete individual entries from tables that are involved in said transaction. Concurrent selects and also updates to these same tables work fine! It's only deletes that have issues! And, as we were able to prove, this happens even for deletes of individual entries that definitely do not interfere with or touch any entry involved in the ongoing transaction.
While we first suspected Hibernate of interfering and doing funny things with deletions, we had to learn that even deletes executed via SQL Developer (i.e. triggered "manually" by a completely unrelated DB session and client) do hang during such periods.
To us it almost seems as if an ongoing transaction does not only lock specific rows against manipulation but locks entire tables.
But can it really be that a transaction blocks entire tables from concurrent delete operations for extended periods?
We think that would be absurd but - as we had to learn and can easily reproduce - deleting entries from tables touched by our long-running transaction invariably hangs. Several times we also witnessed that - as soon as the transaction finished - those deletes that hadn't timed out yet continued and ran to completion.
We are not aware of doing anything weird or unusual in our Hibernate-based application. We certainly don't fiddle with any locking mechanism or such. Any idea or hint what could cause these hangs and/or in which direction to investigate further to resolve this?
Later addition:
We are currently considering the following work-around: we add a column to these tables in which we mark entries as "to-be-deleted" (instead of actually deleting them as we do now). We then run a regular job at quiet times (e.g. nightly) which actually deletes these entries. We "only" need to make sure that no transaction is ever executed on these tables while that delete job runs.
I really hate that approach, especially since it will require adding another condition to many queries to exclude those "virtually deleted" entries, but we have no better idea so far.
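For what it's worth, a bare-bones sketch of that work-around (the table and flag column are invented names, and every live query would additionally need an "AND to_be_deleted = 0" condition):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

// Sketch of the soft-delete work-around: mark rows instead of deleting them,
// then purge them in a scheduled job that runs while no long transaction is
// active. Table and column names are invented.
public class SoftDeleteSketch {

    // Application code calls this instead of issuing a DELETE.
    static void markForDeletion(Connection conn, long id) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE config_entries SET to_be_deleted = 1 WHERE id = ?")) {
            ps.setLong(1, id);
            ps.executeUpdate();
        }
    }

    // The nightly job performs the real deletes.
    static int purge(Connection conn) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM config_entries WHERE to_be_deleted = 1")) {
            return ps.executeUpdate();
        }
    }
}
```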

Difference between LockModeType Jpa

I am confused about the working of LockModeTypes in JPA:
LockModeType.OPTIMISTIC
It increments the version when committing.
The question here is: if I have a version column in my entity and I don't specify this lock mode, it still works the same way - so what is the use of it?
LockModeType.OPTIMISTIC_FORCE_INCREMENT
Here it increments the version column even though the entity is not updated.
But what is the use of it if some other process updates the same row before this transaction is committed? This transaction is going to fail anyway, so what is the use of this LockModeType?
LockModeType.PESSIMISTIC_READ
This lock mode issues a SELECT ... FOR UPDATE NOWAIT (if no timeout hint is specified).
So basically this means that no other transaction can update this row until this transaction is committed - that's basically a write lock, so why is it named a read lock?
LockModeType.PESSIMISTIC_WRITE
This lock mode also issues a SELECT ... FOR UPDATE NOWAIT (if no timeout hint is specified).
The question here is: what is the difference between this lock mode and LockModeType.PESSIMISTIC_READ, as I see both fire the same queries?
LockModeType.PESSIMISTIC_FORCE_INCREMENT
This does a SELECT ... FOR UPDATE NOWAIT (if no timeout hint is specified) and also increments the version number.
I don't get the use of it at all.
Why is a version increment required if FOR UPDATE NOWAIT is already there?
I would first differentiate between optimistic and pessimistic locks, because they are different in their underlying mechanism.
Optimistic locking is fully controlled by JPA and only requires an additional version column in the DB tables. It is completely independent of the underlying DB engine used to store the relational data.
On the other hand, pessimistic locking uses the locking mechanisms provided by the underlying database to lock existing records in tables. JPA needs to know how to trigger these locks, and some databases do not support them, or support them only partially.
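As an illustration, a minimal entity prepared for optimistic locking might look like this (the entity and its columns are invented for the example):

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

// Minimal entity for optimistic locking: JPA maintains the version column itself
// and issues UPDATE ... WHERE id = ? AND version = ? on flush, raising an
// OptimisticLockException when someone else changed the row in the meantime.
@Entity
public class Account {

    @Id
    private Long id;

    private String owner;

    @Version               // numeric (or timestamp) column managed by JPA
    private long version;

    // getters/setters omitted for brevity
}
```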
Now to the list of lock types:
LockModeType.OPTIMISTIC
If entities specify a version field, this is the default. For entities without a version column, using this type of lock isn't guaranteed to work on any JPA implementation. This mode is usually ignored, as stated by ObjectDB. In my opinion it only exists so that you may compute the lock mode dynamically and pass it along even if the lock would end up being OPTIMISTIC anyway. Not a very probable use case, but it is always good API design to provide an option to reference even the default value.
Example:
`LockModeType lockMode = resolveLockMode();`
`A a = em.find(A.class, 1, lockMode);`
LockModeType.OPTIMISTIC_FORCE_INCREMENT
This is a rarely used option. But it could be reasonable if you want to lock an entity that is referenced by another entity. In other words, you want to lock work with an entity even though the entity itself is not modified, while other entities may be modified in relation to it.
Example: We have the entities Book and Shelf. It is possible to add a Book to a Shelf, but the book does not have any reference to its shelf. It is reasonable to lock the action of moving a book to a shelf, so that the book does not end up on another shelf (due to another transaction) before the end of this transaction. To lock this action, it is not sufficient to lock the book's current shelf entity, as the book does not have to be on a shelf yet. It also does not make sense to lock all target bookshelves, as they would probably be different in different transactions. The only thing that makes sense is to lock the book entity itself, even though in our case it does not get changed (it does not hold a reference to its bookshelf).
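A sketch of that scenario in code, assuming Book and Shelf are versioned entities mapped elsewhere (the method and field names are invented):

```java
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

// Sketch of the Book/Shelf case: the Book row itself is not modified, but we
// force its version to be incremented at commit, so any concurrent transaction
// that also relied on this Book fails with an optimistic lock conflict.
public class ShelfService {

    public void addBookToShelf(EntityManager em, long bookId, long shelfId) {
        em.getTransaction().begin();

        Book book = em.find(Book.class, bookId);    // assumed @Version entity
        Shelf shelf = em.find(Shelf.class, shelfId); // assumed @Version entity

        // Bump the book's version at commit even though the Book row is unchanged.
        em.lock(book, LockModeType.OPTIMISTIC_FORCE_INCREMENT);

        shelf.getBooks().add(book);   // only the shelf side / join table changes

        em.getTransaction().commit(); // a concurrent "move the same book" now conflicts
    }
}
```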
LockModeType.PESSIMISTIC_READ
this mode is similar to LockModeType.PESSIMISTIC_WRITE, but differs in one thing: it should not block reading the entity, as long as no write lock is held on the same entity by some other transaction. It also allows other transactions to lock it using LockModeType.PESSIMISTIC_READ. The differences between WRITE and READ locks are well explained here (ObjectDB) and here (OpenJPA). If an entity is already locked by another transaction, any attempt to lock it will throw an exception. This behavior can be modified to wait for some time for the lock to be released before throwing the exception and rolling back the transaction. In order to do that, specify the javax.persistence.lock.timeout hint with the number of milliseconds to wait before throwing the exception. There are multiple ways to do this at multiple levels, as described in the Java EE tutorial.
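One of those ways, as a sketch (the Account entity is the placeholder from the earlier snippet):

```java
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

// Sketch: acquire a pessimistic read lock but wait at most 5 seconds for it,
// instead of failing immediately.
public class LockWithTimeout {

    public Account readLocked(EntityManager em, long id) {
        Map<String, Object> hints = new HashMap<>();
        hints.put("javax.persistence.lock.timeout", 5000); // milliseconds

        return em.find(Account.class, id, LockModeType.PESSIMISTIC_READ, hints);
    }
}
```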
LockModeType.PESSIMISTIC_WRITE
this is a stronger version of LockModeType.PESSIMISTIC_READ. When a WRITE lock is in place, JPA, with the help of the database, will prevent any other transaction from reading the entity, not only from writing it as with a READ lock.
How this is implemented in a JPA provider, in cooperation with the underlying DB, is not prescribed. In your case with Oracle, I would say that Oracle does not provide something close to a READ lock; SELECT ... FOR UPDATE is really rather a WRITE lock. It may be a bug in Hibernate, or just a decision that, instead of implementing a custom "softer" READ lock, the "harder" WRITE lock is used instead. This mostly does not break consistency, but it does not uphold all the rules of READ locks. You could run some simple tests with READ locks and long-running transactions to find out whether multiple transactions are able to acquire READ locks on the same entity. This should be possible, whereas it is not with WRITE locks.
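A rough way to run such a test, as a sketch (the persistence unit name and Account entity are placeholders; the zero timeout hint makes the second attempt fail instead of blocking):

```java
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.LockModeType;
import javax.persistence.Persistence;
import javax.persistence.PersistenceException;

// Probe: can two transactions hold PESSIMISTIC_READ on the same row?
// With a genuine shared read lock the second call succeeds; if the provider
// falls back to SELECT ... FOR UPDATE, it fails (with NOWAIT) or would block.
public class ReadLockProbe {

    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("test-pu");
        EntityManager em1 = emf.createEntityManager();
        EntityManager em2 = emf.createEntityManager();
        try {
            em1.getTransaction().begin();
            em1.find(Account.class, 1L, LockModeType.PESSIMISTIC_READ);

            Map<String, Object> nowait = new HashMap<>();
            nowait.put("javax.persistence.lock.timeout", 0); // fail instead of waiting

            em2.getTransaction().begin();
            try {
                em2.find(Account.class, 1L, LockModeType.PESSIMISTIC_READ, nowait);
                System.out.println("both transactions hold a READ lock");
            } catch (PersistenceException e) {
                System.out.println("second READ lock refused: " + e);
            }
        } finally {
            if (em2.getTransaction().isActive()) em2.getTransaction().rollback();
            if (em1.getTransaction().isActive()) em1.getTransaction().rollback();
            em2.close();
            em1.close();
            emf.close();
        }
    }
}
```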
LockModeType.PESSIMISTIC_FORCE_INCREMENT
this is another rarely used lock mode. However, it is an option where you need to combine PESSIMISTIC and OPTIMISTIC mechanisms. Using plain PESSIMISTIC_WRITE would fail in the following scenario:
1. transaction A uses optimistic locking and reads entity E
2. transaction B acquires a WRITE lock on entity E
3. transaction B commits and releases the lock on E
4. transaction A updates E and commits
In step 4, if the version column is not incremented by transaction B, nothing prevents A from overwriting the changes of B. Lock mode LockModeType.PESSIMISTIC_FORCE_INCREMENT will force transaction B to update the version number, causing transaction A to fail with an OptimisticLockException, even though B was using pessimistic locking.
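A sketch of how transaction B would take that lock (the Account entity and its setter are placeholders carried over from the earlier snippet):

```java
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

// Sketch of transaction B from the scenario: take a pessimistic row lock AND
// bump the version column, so that transaction A's later optimistic commit
// fails with an OptimisticLockException instead of silently overwriting B.
public class TransactionB {

    public void update(EntityManager em, long id) {
        em.getTransaction().begin();

        Account acc = em.find(Account.class, id,
                              LockModeType.PESSIMISTIC_FORCE_INCREMENT);
        acc.setOwner("new owner");      // setter assumed on the placeholder entity

        em.getTransaction().commit();   // row lock released, version incremented
    }
}
```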
LockModeType.NONE
this is the default if entities don't provide a version field. It means that no locking is enabled; conflicts will be resolved on a best-effort basis and will not be detected. This is the only lock mode allowed outside of a transaction.

Dirty Reading in hibernate

Dirty Read: The definition states that
dirty reading occurs when a transaction reads data from a row that has been modified by another transaction but not yet committed.
Assuming the definition is correct, I am unable to fathom any such situation.
Due to the principle of isolation, transaction A cannot see the uncommitted data of a row that has been modified by transaction B. If transaction B has simply not committed, how can transaction A see that data in the first place? It would only seem possible when both operations are performed within the same transaction.
Can someone please explain what I am missing here?
"Dirty", or uncommitted reads (UR) are a way to allow non-blocking reads. Reading uncommitted data is not possible in an Oracle database due to the multi-version concurrency control employed by Oracle; instead of trying to read other transactions' data each transaction gets its own snapshot of data as they existed (committed) at the start of the transaction. As a result all reads are essentially non-blocking.
In databases that use lock-based concurrency control, e.g. DB2, uncommitted reads are possible. A transaction using the UR isolation level ignores locks placed by other transactions, and thus it is able to access rows that have been modified but not yet committed.
Hibernate, being an abstraction layer on top of a database, offers UR isolation level support for databases that have this capability.
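For example, on a database that supports it, the dirty-read isolation level can be requested on the JDBC connection (Hibernate exposes the same knob through its hibernate.connection.isolation setting, using the numeric constants from java.sql.Connection). A sketch with invented connection details:

```java
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: asking for READ UNCOMMITTED ("dirty read") isolation. On databases
// with lock-based concurrency control (e.g. DB2) this allows reading
// uncommitted rows; Oracle simply does not offer this level.
public class DirtyReadDemo {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/SAMPLE", "app", "secret")) {
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
            // ... queries on this connection may now see uncommitted data ...
        }
    }
}
```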

AUTONOMOUS_TRANSACTION: pros and cons

Can autonomous transactions be dangerous? If yes, in which situations? When are autonomous transactions necessary?
Yes, autonomous transactions can be dangerous.
Consider the situation where you have your main transaction. It has inserted/updated/deleted rows. If you then, within that, set up an autonomous transaction, then one of the following applies:
(1) It will not query any data at all. This is the 'safe' situation. It can be useful to log information independently of the primary transaction so that it can be committed without impacting the primary transaction (which can be useful for logging error information when you expect the primary transaction to be rolled back).
(2) It will only query data that has not been updated by the primary transaction. This is safe, but superfluous. There is no point to the autonomous transaction.
(3) It will query data that has been updated by the primary transaction. This smacks of a poorly thought-through design, since you've overwritten something and then need to go back to see what it was before you overwrote it. Sometimes people think that an autonomous transaction will still see the uncommitted changes of the primary transaction, and it won't. It reads the currently committed state of the database, plus any changes made within the autonomous transaction. Some people (often trying autonomous transactions in response to mutating trigger errors) don't care what state the data is in when they try to read it, and these people simply shouldn't be allowed access to a database.
(4) It will try to update/delete data that hasn't been updated by the primary transaction. Again, this smacks of poor design. These changes are going to get committed (or rolled back) whether the primary transaction succeeds or fails. Worse, you risk issue (5), since it is hard to determine, within an autonomous transaction, whether the data has been updated by the primary transaction.
(5) It will try to update/delete data that has already been updated by the primary transaction, in which case it will deadlock and end up in an ugly mess.
Can autonomous transactions be dangerous?
Yes.
If yes, in which situations?
When they're misused. For example, when used to make changes to data which should have been rolled back if the rest of the parent transaction is rolled back. Misusing them can cause data corruption because some portions of a change are committed, while others are not.
When are autonomous transactions necessary?
They are necessary when the effects of one transaction must survive, regardless of whether the parent transaction is committed or rolled back. A good example is a procedure which logs the progress and activity of a process to a database table.
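As an illustration only, a sketch of such logging done in an autonomous transaction and driven from JDBC (the table, columns, and connection details are invented; in real code this would normally live in a stored procedure declared with PRAGMA AUTONOMOUS_TRANSACTION):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: write a log row inside an autonomous transaction so it survives even
// if the main transaction later rolls back. Table, columns and connection
// details are invented.
public class AutonomousLogger {

    static void log(Connection conn, String message) throws Exception {
        String block =
            "DECLARE\n" +
            "  PRAGMA AUTONOMOUS_TRANSACTION;\n" +
            "BEGIN\n" +
            "  INSERT INTO process_log (logged_at, message) VALUES (SYSTIMESTAMP, ?);\n" +
            "  COMMIT;\n" +      // commits only the autonomous transaction
            "END;";
        try (CallableStatement cs = conn.prepareCall(block)) {
            cs.setString(1, message);
            cs.execute();
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "app", "secret")) {
            conn.setAutoCommit(false);
            // ... work of the main transaction ...
            log(conn, "step 1 finished");   // persisted immediately
            conn.rollback();                // the log row written above survives
        }
    }
}
```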
When are autonomous transactions necessary?
Check my question: How can LOCK survive COMMIT or how can changes to LOCKed table be propagated to another session without COMMIT and losing LOCK
We ingest business configurations sequentially and must forbid parallel processing.
I use a lock on the table with the configurations and update other tables accordingly. I commit each batch of updates to the other tables, as we can't afford to keep a transaction open across all records - the probability of collision would be near 0.99.
Each failure because of concurrent access is persisted to a log for a later update attempt.

Oracle transaction read-consistency?

I have a problem understanding read consistency in database (Oracle).
Suppose I am the manager of a bank. A customer has got a lock (which I don't know about) and is doing some updating. Now, after he has got the lock, I am viewing their account information and trying to do something with it. But because of read consistency I will see the data as it existed before the customer got the lock. So won't that affect the inputs I am getting and the decisions I am going to make during that period?
The point about read consistency is this: suppose the customer rolls back their changes? Or suppose those changes fail because of a constraint violation or some system failure?
Until the customer has successfully committed their changes those changes do not exist. Any decision you might make on the basis of a phantom read or a dirty read would have no more validity than the scenario you describe. Indeed they have less validity, because the changes are incomplete and hence inconsistent. Concrete example: if the customer's changes include making a deposit and making a withdrawal, how valid would your decision be if you had looked at the account when they had made the deposit but not yet made the withdrawal?
Another example: a long running batch process updates the salary of every employee in the organisation. If you run a query against employees' salaries do you really want a report which shows you half the employees with updated salaries and half with their old salaries?
edit
Read consistency is achieved by using the information in the UNDO tablespace (rollback segments in the older implementation). When a session reads data from a table which is being changed by another session, Oracle retrieves the UNDO information which has been generated by that second session and substitutes it for the changed data in the result set presented to the first session.
If the reading session is a long-running query, it might fail due to the notorious ORA-01555: snapshot too old. This means the UNDO extent which contained the information necessary to assemble a read-consistent view has been overwritten.
Locks have nothing to do with read consistency. In Oracle writes don't block reads. The purpose of locks is to prevent other processes from attempting to change rows we are interested in.
For systems that have a large number of users, where users may "hold" the lock for a long time, the Optimistic Offline Lock pattern is usually used, i.e. include the version in the UPDATE ... WHERE statement.
You can use a date, a version id or something else as the row version. The pseudocolumn ORA_ROWSCN may also be used, but you need to read up on it first.
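A minimal sketch of that pattern over plain JDBC (the table and column names are invented):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

// Sketch of the Optimistic Offline Lock pattern: carry the version read earlier
// into the WHERE clause; zero updated rows means someone else changed the row
// in the meantime. Table/column names are invented.
public class OptimisticOfflineLock {

    static boolean updateBalance(Connection conn, long accountId,
                                 long expectedVersion, double newBalance) throws Exception {
        String sql =
            "UPDATE accounts " +
            "   SET balance = ?, version = version + 1 " +
            " WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setDouble(1, newBalance);
            ps.setLong(2, accountId);
            ps.setLong(3, expectedVersion);
            return ps.executeUpdate() == 1;  // false -> stale data: retry or report
        }
    }
}
```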
When a record is locked due to changes or an explicit lock statement, an entry is made into the header of that block. This is called an ITL (interested transaction list). When you come along to read that block, your session sees this and knows where to go to get the read consistent copy from the rollback segment.
