In my PL/SQL process there is an execution of "ALTER TABLE ... EXCHANGE PARTITION ..." on some table.
The problem is that during that operation other users can try to access the target table. One process that executed a SELECT query on that table got this error:
ORA-08103: object no longer exists.
I think it is not the same as 'table or view does not exist'. I think the 'object no longer exists' error comes when the process starts OK, and then the exchange (or another operation) comes in from the side, so the process can't be completed.
The chance of this happening is very low, because the exchange is very fast. But is there any idea how to solve it for this case? How can I prevent this situation? Maybe there is a way to execute the exchange only if no one is touching the table?
Thanks.
This will happen when session #2 alters the table after session #1 has opened its cursor but before it has finished fetching the rows from that cursor.
I don't think there is a foolproof way to prevent the exchange from happening unless you are willing to change the code that other users are using to access the target table.
LOCK TABLE mytable IN EXCLUSIVE MODE will not wait for SELECT statements to complete, nor will it prevent new SELECT statements from starting, so acquiring an exclusive lock before attempting the ALTER TABLE will not work.
If you want to prevent the ALTER TABLE from happening at the same time as SELECTs, you need both to depend on acquiring the same lock. A relatively robust way to do that would be to use the DBMS_LOCK package to allocate a lock for that table. Call that lock "mytablelock".
Then, using DBMS_LOCK, your SELECT sessions would need to acquire "shared" locks on "mytablelock" before progressing. Your alter table sessions would need to acquire an "exclusive" lock on "mytablelock" before progressing.
This scheme would allow multiple SELECT sessions to run without interfering with each other, but it would prevent the ALTER TABLE from running while any SELECT was running.
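A minimal sketch of that scheme, assuming the made-up lock name "mytablelock" (only the DBMS_LOCK calls themselves are real API; the name and timeout are illustrative):

```sql
DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- Map the name "mytablelock" to a lock handle. Safe to call from every
  -- session; note that ALLOCATE_UNIQUE issues a COMMIT.
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'mytablelock', lockhandle => l_handle);

  -- SELECT sessions: take the lock in shared mode before querying.
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.S_MODE,
                                timeout           => 10,
                                release_on_commit => FALSE);
  IF l_status = 0 THEN
    -- ... run the SELECT against the target table here ...
    l_status := DBMS_LOCK.RELEASE(l_handle);
  END IF;
END;
/
```

The session doing the ALTER TABLE would run the same code but request DBMS_LOCK.X_MODE instead of DBMS_LOCK.S_MODE. A non-zero status from REQUEST means the lock was not acquired (1 = timeout, 2 = deadlock, and so on), so the session should not proceed.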
A (much) less robust, but simpler, way to do it would be to change the SELECT statements issued by the other sessions into SELECT ... FOR UPDATE statements. But that's a recipe for lots of unnecessary waits and deadlock errors.
Related
It is a well known fact that in an Oracle database it is not possible to make a transaction out of multiple DDL statements.
However, is there any way to lock a specific set of database objects within the current connection so that after a DDL query is executed, all locks are held until they are explicitly released?
An obvious solution of this kind doesn't work, because executing the DDL statement automatically commits the transaction, and with it, the locks are released:
LOCK TABLE x ....;
LOCK TABLE y ....;
ALTER TABLE x ....; -- Does not work properly since table locks are released here
ALTER TABLE y ....;
ALTER TABLE x ....;
COMMIT;
The DBMS_LOCK option doesn't work either, because it is an advisory lock, and the concurrent thread must respect this lock and at least be aware of its existence.
Moreover, there is no control over which statements concurrent threads/sessions may execute. Queries can only be issued from the current session, and it must be ensured that no intermediate queries on tables X and Y are executed from other sessions until the current session's work has ended.
Are there any ideas how this can be implemented?
PS: Please don't mention the high-level task or XY problem. There is no high-level task. The question is posed exactly as it is.
A bit of a joke (breaks all dependent PL/SQL), but... ;)
ALTER TABLE x RENAME TO x__my_precious;
ALTER TABLE y RENAME TO y__my_precious;
ALTER TABLE x__my_precious ...;
ALTER TABLE y__my_precious ...;
ALTER TABLE x__my_precious RENAME TO x;
ALTER TABLE y__my_precious RENAME TO y;
I'm pretty sure what you're trying to do isn't possible with Oracle's native transaction control. DDL will always end a transaction, so no lock on that object is going to survive it. Even if you immediately attempted to lock it after the DDL, another waiting session could slip in and obtain the lock before you do.
You can, however, serialize access to the table by utilizing another dummy table or row in a dummy table, assuming you control the code of any process wishing to access the table. If this is the case, then before accessing the table, attempt to lock the dummy table or a row in it first, and only if it succeeds continue with accessing the main table. Then the process that does DDL can take out that same lock (preventing other processes from proceeding), then do the DDL in a subroutine (named PL/SQL block) with PRAGMA AUTONOMOUS_TRANSACTION. That way the DDL ends the autonomous transaction rather than the main one, which still holds the lock on the dummy table.
You have to use a dummy table because if you tried to use the same table you want to modify you'll deadlock yourself. Of course, this only works if you can make all other processes do the lock-the-dummy-table safety check before they can proceed.
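A sketch of that arrangement, assuming a single-row dummy table called DDL_GATE (all names, and the elided ALTER TABLE clauses, are placeholders for illustration):

```sql
-- One-time setup: the dummy lock table.
CREATE TABLE ddl_gate (id NUMBER PRIMARY KEY);
INSERT INTO ddl_gate VALUES (1);
COMMIT;

-- The DDL runs in an autonomous transaction so that its implicit
-- commit ends only the autonomous transaction, not the main one.
CREATE OR REPLACE PROCEDURE do_ddl AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  EXECUTE IMMEDIATE 'ALTER TABLE x ...';  -- your real DDL here
  EXECUTE IMMEDIATE 'ALTER TABLE y ...';
END;
/

-- The modifying process:
DECLARE
  l_id NUMBER;
BEGIN
  -- Blocks until no other process holds the gate row.
  SELECT id INTO l_id FROM ddl_gate WHERE id = 1 FOR UPDATE;
  do_ddl;    -- main transaction still holds the lock on ddl_gate
  COMMIT;    -- now release the gate
END;
/
```

Every other process would likewise SELECT ... FOR UPDATE the ddl_gate row before touching x and y, and COMMIT when done.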
Lastly, although what I said above should work, it is likely that you're trying to do something you shouldn't do. DDL against hot objects isn't a good idea. Whatever you're trying to do, there is probably a better way than modifying objects on the fly like this. Even if you are able to keep others locked out, you are likely to cause object reference errors, package invalidations, SQL cursor invalidations, etc. It can be a real mess.
I have a PL/SQL script that clears (via DELETE FROM statements) and populates several dependent tables like this:
delete from table-A;
insert into table-A values(...);
delete from table-B;
insert into table-B values(...);
These operations take about 10 seconds to complete, and I'd like to stop all SQL queries that try to read data from table-A or table-B while the tables are updating. These queries should block and then continue execution once table-A and table-B are completely updated.
What is the proper way to do this?
As others have pointed out, Oracle's basic concurrency model is that writers do not block readers and readers do not block writers. You can't stop a simple select from running. Your queries will see the data as of the SCN that they started executing (assuming that you're using the default read committed transaction isolation level) so they will have a consistent view of the data before your updates started.
You could potentially acquire a custom named lock using dbms_lock.request. You would need to acquire this lock before running your updates and every session that queries the tables would also need to acquire the lock before it starts to query the tables. That will, obviously, decrease the scalability of your application but it will accomplish what you appear to be asking for. Presumably, the sessions doing queries can acquire the lock in shared mode while the session doing the updates would need to acquire it in exclusive mode.
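A sketch of the writer's side, assuming a made-up lock name 'refresh_tables' and underscores in the table names, since hyphens aren't legal in Oracle identifiers (the reading sessions would request the same lock in DBMS_LOCK.S_MODE before querying):

```sql
DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  DBMS_LOCK.ALLOCATE_UNIQUE('refresh_tables', l_handle);
  -- Exclusive mode: waits until all readers holding the shared lock finish.
  l_status := DBMS_LOCK.REQUEST(l_handle, DBMS_LOCK.X_MODE,
                                timeout => 60, release_on_commit => TRUE);
  IF l_status <> 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'could not lock tables for refresh: ' || l_status);
  END IF;
  DELETE FROM table_A;
  INSERT INTO table_A VALUES (...);  -- your real load here
  DELETE FROM table_B;
  INSERT INTO table_B VALUES (...);
  COMMIT;  -- release_on_commit => TRUE frees the lock here
END;
/
```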
I have two concurrent transactions executing this bit of code (simplified for illustration purposes):
@Transactional
public void deleteAccounts() {
    List<User> users = em.createQuery("select u from User u", User.class)
                         .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                         .getResultList();
    for (User user : users) {
        em.remove(user);
    }
}
My understanding is that one of the transactions, say transaction A, should execute the SELECT first, lock all the rows it needs and then go on with the DELETEs while the other transaction should wait for A's commit before performing the SELECT. However, this code is deadlocking. Where am I wrong?
The USER table probably has a lot of foreign keys referring to it. If any of them are un-indexed Oracle will lock the entire child table while it deletes the row from the parent table. If multiple statements run at the same time, even for a different user, the same child tables will be locked. Since the order of those recursive operations cannot be controlled it is possible that multiple sessions will lock the same resources in a different order, causing a deadlock.
See this section in the Concepts manual for more information.
To resolve this, add indexes to any un-indexed foreign keys. If the column names are standard a script like this could help you find potential candidates:
--Find un-indexed foreign keys.
--
--Foreign keys.
select owner, table_name
from dba_constraints
where r_constraint_name = 'USER_ID_PK'
and r_owner = 'THE_SCHEMA_NAME'
minus
--Tables with an index on the relevant column.
select table_owner, table_name
from dba_ind_columns
where column_name = 'USER_ID';
When you use PESSIMISTIC_WRITE, JPA generally translates it to SELECT ... FOR UPDATE, which takes a lock in the database. This is not necessarily a row lock: it depends on the database and on how you configure the lock. In some databases the default lock granularity is a page or block rather than a row, so check your database's documentation to confirm how it takes the lock; you may also be able to change the configuration so the lock applies to a single row.
When you call the deleteAccounts method, it starts a new transaction, and the lock stays active until that transaction commits (or rolls back), which in this case is when the method finishes. If another transaction wants to acquire the same lock it cannot, and I think this is why you get the deadlock. I suggest you try another mechanism, maybe an optimistic lock, or a lock on a single entity.
You can try giving a timeout for acquiring the lock:
em.createQuery("select u from User u", User.class)
  .setLockMode(LockModeType.PESSIMISTIC_WRITE)
  .setHint("javax.persistence.lock.timeout", 5000)
  .getResultList();
I found a good article that explains this error better; it is caused by the database:
Oracle automatically detects deadlocks and resolves them by rolling
back one of the transactions/statements involved in the deadlock, thus
releasing one set of resources/data locked by that transaction. The
session that is rolled back will observe Oracle error: ORA-00060:
deadlock detected while waiting for resource. Oracle will also produce
detailed information in a trace file under database's UDUMP directory.
Most commonly these deadlocks are caused by the applications that
involve multi table updates in the same transaction and multiple
applications/transactions are acting on the same table at the same
time. These multi-table deadlocks can be avoided by locking tables in
same order in all applications/transactions, thus preventing a
deadlock condition.
Can anyone please explain the locking modes in Oracle, i.e. share, exclusive and update locks? I found many explanations of these, according to which:
Share lock: nobody can change the data; it is for read-only purposes.
Exclusive lock: only one user/connection is allowed to change the data.
Update lock: rows are locked until the user commits/rolls back.
Then I tried share mode to check how it works:
SQL> lock table emp in share mode;
Table(s) Locked.
SQL> update emp set sal=sal+10;
14 rows updated.
So I found that a user can change data even after a share lock is taken. Then what makes it different from an exclusive lock and an update lock?
Another question: how are update locks and exclusive locks different from each other? They seem almost equivalent.
Posting an explanation for future visitors; it also answers the question.
Shared lock
Before I begin, let me first say that there are five types of table locks: row share, row exclusive, share, share row exclusive and exclusive. The share lock is one of these. Also, please note that there are row locks, which are different from table locks. Follow the link provided at the end to read about all of this.
A share lock is acquired on the table specified in the following statement: LOCK TABLE table IN SHARE MODE;
This lock prevents other transactions from getting "row exclusive" (the lock used by INSERT, UPDATE and DELETE statements), "share row exclusive" and "exclusive" table locks on that table; otherwise, everything is permitted.
So this means that a share lock will block other transactions from executing INSERT, UPDATE and DELETE statements on that table, but it will allow other transactions to lock rows using SELECT ... FOR UPDATE, because that statement only requires a "row share" lock, which is permitted while a "share" lock is held.
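A quick two-session illustration of this, using the EMP demo table from the question:

```sql
-- Session 1
LOCK TABLE emp IN SHARE MODE;

-- Session 2: permitted, because SELECT ... FOR UPDATE only needs a
-- "row share" table lock, which is compatible with "share".
SELECT * FROM emp WHERE sal > 1000 FOR UPDATE;

-- Session 2: blocks until session 1 commits, because UPDATE needs a
-- "row exclusive" table lock, which conflicts with "share".
UPDATE emp SET sal = sal + 10;
```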
The following matrix is a good summary of the lock modes and what's permitted (Yes = the requested lock can be granted while the held lock is in place):

    Held \ Requested        RS    RX    S     SRX   X
    Row Share (RS)          Yes   Yes   Yes   Yes   No
    Row Exclusive (RX)      Yes   Yes   No    No    No
    Share (S)               Yes   No    Yes   No    No
    Share Row Excl (SRX)    Yes   No    No    No    No
    Exclusive (X)           No    No    No    No    No
Since many users will follow this question, I decided to go one step further and include my learning notes; I hope folks will benefit from them:
Source of this information and also excellent reading about Oracle locks.
It's very well explained in the documentation: http://docs.oracle.com/cd/E11882_01/server.112/e41084/ap_locks001.htm#SQLRF55502
In your example you locked the table in share mode. This does not prevent other sessions from locking the same object in share mode, but it does prevent them from locking it in exclusive mode, so another session could not drop the table (which requires an exclusive lock) while it is being updated. Note that your own UPDATE was not blocked because the share lock is held by your own session, and a lock never blocks the session that holds it.
I have some code that is running on several machines, and accesses an Oracle database. I use this database (among other things) as a synchronization object between the different machines by locking rows.
The problem I have is that when my processes start up, there is nothing yet in the database to rely on for synchronization, and my processes get Oracle exceptions about a unique constraint being violated, since they all try to insert at the same time.
My solution for now is to catch that precise exception and ignore it, but I don't really like having exceptions being thrown in the normal workflow of my application.
Is there a better way to "test and insert" atomically in a database ? Locking the whole table/partition when inserting a row is not an acceptable solution.
I checked merge into, thinking it was my solution, but it produces the same problem.
You probably want to use DBMS_LOCK, which allows user application code to implement the same locking model the Oracle database uses to lock rows and other resources. You can create an enqueue of type 'UL' (user lock), define a resource name, and then have multiple sessions lock to their heart's content, without any dependence on data in a table somewhere. It supports both exclusive and shared locking, so you can have some processes that run concurrently (if they take a shared lock) and other processes that run exclusively (if they take an exclusive lock), and they will automatically queue behind any locks of the conflicting type held by the other processes.
It's a very flexible locking model, and you don't need to rely on any data in any table to implement it.
See the Oracle PL/SQL Packages and Types Reference, for the full scoop on the DBMS_LOCK package.
Hope that helps.
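For example (a sketch; the lock name 'init_sync' and the sync_rows table are made up for illustration), each process could serialize its startup insert like this instead of relying on catching the unique-constraint exception:

```sql
DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
  l_count  PLS_INTEGER;
BEGIN
  DBMS_LOCK.ALLOCATE_UNIQUE('init_sync', l_handle);
  l_status := DBMS_LOCK.REQUEST(l_handle, DBMS_LOCK.X_MODE, timeout => 30);
  IF l_status <> 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'init lock not acquired: ' || l_status);
  END IF;
  -- Test and insert, now guaranteed to run in one process at a time.
  SELECT COUNT(*) INTO l_count FROM sync_rows WHERE name = 'startup';
  IF l_count = 0 THEN
    INSERT INTO sync_rows (name) VALUES ('startup');
  END IF;
  COMMIT;
  l_status := DBMS_LOCK.RELEASE(l_handle);
END;
/
```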
You won't get an error immediately if your PK is policed by a non-unique index, consider:
<<SESSION 1>>
SQL> create table afac (
2 id number,
3 constraint afac_pk primary key (id)
4 deferrable /* will cause the PK to be policed by a non-unique index */
5 );
Table created.
SQL> insert into afac values (1);
1 row created.
<<SESSION 2>>
SQL> insert into afac values (1); /* Will cause session 2 to be blocked */
Session 2 will be blocked until session 1 either commits or rolls back. I don't know whether this mechanism is compatible with your requirement, though.