Oracle ORA-00054 but probably no locks

I have legacy code that does the following:
ALTER TABLE A RENAME TO B;
ALTER TABLE C RENAME TO A;
ALTER TABLE B RENAME TO C;
It swaps the two tables A and C, using B as a temporary name.
The problem: the third alter table DDL throws the error:
ORA-00054 RESOURCE BUSY
I do not understand how there can be a lock after the first two DDLs. At that point every transaction should already be committed.
It happens quite often but not always - sometimes it works, sometimes not.
There is no chance that some other session altered these tables' data during the swap. First, it's a very short operation; second, only one table (table A) is really used, while the other is more like an archive, so nobody performs any DML on it. And even if we assume the unlikely scenario that someone indeed managed to connect and lock something, the window is so short that I could understand it happening once, but it happens after every 2-3 swaps.
I have no clue. Is it possible that some old locks are still active after a table is renamed?
Thanks.

After the second statement runs, but before the third statement runs, some other session could take a lock on table B (or potentially on a child or parent table, a dependent PL/SQL package, etc.).
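One mitigation, not from the original answer but a documented Oracle 11g+ session parameter, is DDL_LOCK_TIMEOUT: it makes DDL wait for the lock instead of failing immediately with ORA-00054, which shrinks the window in which the swap can break. A minimal sketch:

ALTER SESSION SET ddl_lock_timeout = 30;  -- wait up to 30s for busy locks
ALTER TABLE A RENAME TO B;
ALTER TABLE C RENAME TO A;
ALTER TABLE B RENAME TO C;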

Related

Preserve exclusive table lock after DDL in Oracle

It is a well known fact that in an Oracle database it is not possible to make a transaction out of multiple DDL statements.
However, is there any way to lock a specific set of database objects within the current connection so that after a DDL query is executed, all locks are held until they are explicitly released?
An obvious solution of this kind doesn't work, because executing the DDL statement automatically commits the transaction, and with it, the locks are released:
LOCK TABLE x ....;
LOCK TABLE y ....;
ALTER TABLE x ....; -- Does not work properly since table locks are released here
ALTER TABLE y ....;
ALTER TABLE x ....;
COMMIT;
The DBMS_LOCK option doesn't work either, because it is an advisory lock: a concurrent session must respect this lock, or at least be aware of its existence.
Moreover, there is no control over which statements concurrent threads/sessions execute. Only the current session's statements can be controlled, and it must be guaranteed that no intermediate queries against tables X and Y are executed from other sessions until the current session has finished.
Are there any ideas how this can be implemented?
PS: Please don't mention the high-level task or XY problem. There is no high-level task. The question is posed exactly as it is.
A bit of a joke (breaks all dependent PL/SQL), but... ;)
ALTER TABLE x RENAME TO x__my_precious;
ALTER TABLE y RENAME TO y__my_precious;
ALTER TABLE x__my_precious ...;
ALTER TABLE y__my_precious ...;
ALTER TABLE x__my_precious RENAME TO x;
ALTER TABLE y__my_precious RENAME TO y;
I'm pretty sure what you're trying to do isn't possible with Oracle's native transaction control. DDL will always end a transaction, so no lock on that object is going to survive it. Even if you immediately attempted to lock it after the DDL, another waiting session could slip in and obtain the lock before you do.
You can, however, serialize access to the table by utilizing another dummy table or row in a dummy table, assuming you control the code of any process wishing to access the table. If this is the case, then before accessing the table, attempt to lock the dummy table or a row in it first, and only if it succeeds continue with accessing the main table. Then the process that does DDL can take out that same lock (preventing other processes from proceeding), then do the DDL in a subroutine (named PL/SQL block) with PRAGMA AUTONOMOUS_TRANSACTION. That way the DDL ends the autonomous transaction rather than the main one, which still holds the lock on the dummy table.
You have to use a dummy table because if you tried to use the same table you want to modify, you'd deadlock yourself. Of course, this only works if you can make all other processes perform the lock-the-dummy-table safety check before they proceed.
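A minimal sketch of that pattern, assuming illustrative names (ddl_gate, do_protected_ddl) and placeholder DDL strings, none of which are from the original answer:

-- One-time setup: the dummy table that serializes access.
CREATE TABLE ddl_gate (id NUMBER PRIMARY KEY);
INSERT INTO ddl_gate VALUES (1);
COMMIT;

-- The DDL runs in an autonomous transaction, so its implicit
-- commit ends only that inner transaction, not the caller's.
CREATE OR REPLACE PROCEDURE do_protected_ddl(p_sql IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  EXECUTE IMMEDIATE p_sql;
END;
/

-- Main session: the row lock on ddl_gate survives both DDLs and
-- is released only by the final COMMIT.
DECLARE
  v_id ddl_gate.id%TYPE;
BEGIN
  SELECT id INTO v_id FROM ddl_gate WHERE id = 1 FOR UPDATE;
  do_protected_ddl('ALTER TABLE x ADD (tmp_col NUMBER)');  -- placeholder DDL
  do_protected_ddl('ALTER TABLE y ADD (tmp_col NUMBER)');  -- placeholder DDL
  COMMIT;
END;
/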
Lastly, although what I said above should work, it is likely that you're trying to do something you shouldn't. DDL against hot objects isn't a good idea. Whatever you're trying to do, there is probably a better way than modifying objects on the fly like this. Even if you are able to keep others locked out, you are likely to cause object reference errors, package invalidations, SQL cursor invalidations, etc. It can be a real mess.

MariaDB, locking a table for replacement?

I have a web application using a MariaDB 10.4.10 InnoDB table, which is updated every 5 minutes from a script.
The script works like this:
it creates a temp table from a table XY and writes data to the temp table from a received CSV file. When the data is written, the script starts a transaction, drops the XY table, renames the temp table to XY, and commits the transaction.
Nevertheless, sometimes a user gets an "XY table does not exist" error while working with the application.
I already tried to LOCK the XY table in the transaction, but it doesn't change a thing.
How can I solve this? Is there any kind of locking that would help (I thought explicit locking was no longer possible with InnoDB)?
Do this another way.
1. Create the temporary table (not as a TEMPORARY table, but as a real table). Fill it as needed. Nobody else knows it's there; you have all the time you need.
2. SET autocommit = 0;  # or: LOCK TABLE xy WRITE, tempxy READ;
3. DELETE FROM xy;
4. INSERT INTO xy SELECT * FROM tempxy;
5. COMMIT WORK;  # or: UNLOCK TABLES;
6. DROP TABLE tempxy;
This way, other customers will see the old table until point 5, then they'll start seeing the new table.
If you use LOCK, customers will stall from point 2 to point 5, which, depending on time needed, might be bad.
At point 3, in some scenarios you might be able to optimize things by deleting only the rows that are not in tempxy, and running an INSERT ... ON DUPLICATE KEY UPDATE at point 4, as sketched below.
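A hedged sketch of that optimization, assuming xy has a primary key id and a payload column val (both column names are illustrative):

# Remove rows that disappeared from the feed, then upsert the rest.
DELETE FROM xy WHERE id NOT IN (SELECT id FROM tempxy);
INSERT INTO xy (id, val)
SELECT id, val FROM tempxy
ON DUPLICATE KEY UPDATE val = VALUES(val);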
Funnily enough, I recently answered another question that was somewhat like yours.
autoincrement
To prevent the autoincrement column from overflowing, you can replace COMMIT WORK with ALTER TABLE xy AUTO_INCREMENT=... This is a dirty hack that relies on the fact that this DDL command in MySQL/MariaDB executes an implicit COMMIT immediately followed by the DDL command itself. If nobody else inserts into that table, it is completely safe. If somebody else inserts into that table at the exact moment your script is running, it should be safe in MySQL 5.7 and derived releases; it might not be in other releases and flavours, e.g. MySQL 8.0 or Percona.
In practice, you fill up tempxy with a fresh autoincrement starting from 1 (since tempxy has just been created), then perform the DELETE/INSERT, and update the autoincrement counter to the count of rows you've just inserted.
To be completely sure, you can use a cooperative lock around the DDL command on one side, and around anyone else's INSERTs on the other:
script thread                              other thread
SELECT GET_LOCK('xy-ddl', 30);
                                           SELECT GET_LOCK('xy-ddl', 30);
ALTER TABLE `xy` AUTO_INCREMENT=12345;     # thread waits while
                                           # script thread commits
                                           # and runs DDL
SELECT RELEASE_LOCK('xy-ddl');             # thread can acquire lock
                                           INSERT INTO ...
DROP TABLE tempxy;                         # gets id = 12346
                                           SELECT RELEASE_LOCK('xy-ddl');

Sybase ASE remote row insert locking

I'm working on an application that accesses a Sybase ASE 15.0.2 server, where the current code accesses a remote database (CIS) to insert a row using a proxy table definition (the destination table is a DOL - DRL table; the PK is defined as an identity column and is always growing). The current code performs a SELECT to check whether the row already exists, to avoid inserting duplicate data.
Since the remote table also has a PK defined on it, I understand that the PK verification will be done again before the row is committed.
I'm planning to remove the SELECT check, since it is effectively repeated by the PK verification, but I'm concerned that when a file with many duplicates is received, the table may suffer unnecessary contention as the data is committed.
It's not clear to me whether Sybase ASE tries to hold the last row and writes the data before checking for the duplicate. Also, if the table is very big, I'm concerned about the time it will spend searching the whole index for duplicates.
I've found some documentation for SQL Anywhere, but not for ASE, at the following link:
http://dcx.sybase.com/1200/en/dbusage/insert-how-transact.html
The best I could find is the following explanation:
https://groups.google.com/forum/?fromgroups#!topic/comp.databases.sybase/tHnOqptD7X8
But it doesn't explain in detail how the row is locked (or whether there is any optimization to write it ahead of, or at the same time as, the PK check), nor whether it will waste a full PK lookup when I'm inserting a row whose PK is definitely greater than the last row committed.
Thanks
Alex
Unlike SQL Anywhere, ASE has no wait_for_commit option. The primary key constraint is checked during the insert, not at commit time. As I understand your post, the problem is a mass insert from a file that may contain duplicates; the approach there is to load into a temp table, check for duplicates, remove them, and then insert the unique rows. Mass inserts are a lot faster, though they still check for primary key violations, and with pre-deduplication there is no rollback cost. The insert statement is always all or nothing: if even one row is a duplicate, the entire insert statement fails. Checking before inserting is more of an error-free approach, as opposed to relying on the constraint for verification, because the constraint will fail and the rollback will again be costly.
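A minimal T-SQL sketch of that load-and-deduplicate approach; target_table, #stage, and pk_col are illustrative names, not from the thread:

-- Stage the file's rows in an empty copy of the target.
SELECT * INTO #stage FROM target_table WHERE 1 = 0

-- (bulk load the received file into #stage here, e.g. with bcp)

-- Insert only the keys not already present. Duplicates within the
-- file itself would also need removing (e.g. with a GROUP BY), and
-- if the PK is the identity column itself you would additionally
-- need SET IDENTITY_INSERT target_table ON.
INSERT INTO target_table
SELECT s.*
FROM #stage s
WHERE NOT EXISTS
      (SELECT 1 FROM target_table t WHERE t.pk_col = s.pk_col)

DROP TABLE #stage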
Thanks Mike
The link does have a very quick explanation of the insert from the CIS perspective. It's a variable to keep an eye on, given that CIS may become a significant time consumer if it performs data and syntax checking that is then done again when CIS forwards the insert statement to the target server. I was afraid that CIS could have some influence, beyond the network traffic/time, on the locking/PK checking.
Raju
I do agree that checking whether the row already exists with a SELECT, done in a batch, avoids the PK duplication, but I'm currently looking for a stopgap solution, and that may be to perform the insert command in batches of about 50 rows and leave the duplicate key check to the PK.
Hopefully the PK check will be done over a join of the 50 newly inserted rows, and thus avoid traversing the index for each single row.
I'll try to test this and comment back.
Alex

Oracle, 2 procedures avoid deadlock

I have two procedures that I want to run on the same table: one uses the birth date, and the other updates the first and last name taken from a third table.
The one that uses the birthday to update the age field runs over the whole table, while the one that updates the names only touches the rows that appear in the third table, matched on a key.
So I launched both and got a deadlock! Is there a way to prioritize either of them? I read about NOWAIT and SKIP LOCKED for the update, but then how would I return to the rows that were skipped?
Hope you can help me with this!
One possibility is to lock all the rows you will update at once. Doing all the updates in a single UPDATE statement will accomplish this. Or:
select whatever from T
where ...
for update;
Another solution is to create what I call a "Gatekeeper" table. Both procedures need to lock the Gatekeeper table in exclusive mode before updating the table in question. The second procedure will block until the first commits, but won't deadlock. In 11g you can create a table with no space allocated.
A variation is to insert a row into the Gatekeeper and then lock only that row with SELECT ... FOR UPDATE. That way you can reuse the Gatekeeper in other situations; see the sketch below.
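A minimal sketch of the Gatekeeper pattern; the table name gatekeeper is illustrative:

-- One-time setup; with 11g deferred segment creation the empty
-- table takes no space until a row is inserted.
CREATE TABLE gatekeeper (dummy CHAR(1));

-- At the top of each procedure:
LOCK TABLE gatekeeper IN EXCLUSIVE MODE;
-- ...perform the updates; the other procedure blocks here...
COMMIT;  -- releases the gatekeeper

-- Row-level variation: insert one row once, then have each
-- procedure lock just that row instead of the whole table.
INSERT INTO gatekeeper VALUES ('X');
COMMIT;
SELECT dummy FROM gatekeeper WHERE dummy = 'X' FOR UPDATE;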
I would guess that you deadlocked because the update of all the rows and the update of a small set of rows accessed the rows in different orders.
The former used a full scan and reached Row A first, then went on to other rows, eventually trying to lock Row B. However, the other query, driven from an index or a join, already had Row B locked and was off to lock Row A when it found Row A already locked.
So, the fix: firstly, having an age column that needs to be constantly modified is a really bad idea. Perhaps it was done to allow indexing of age, but with a correctly written query an index on date of birth will let you find the same records just as quickly. You've broken normalisation rules and ended up coding yourself a deadlocking application. Hopefully you are only updating the rows that need to be updated, not all of them regardless; that would just be insane.
The best solution is to get rid of that design flaw.
The not-so-good solution is to deconflict your queries by running them at different times, or by using DBMS_LOCK so that only one of them can run at any time; a sketch follows.
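A hedged sketch of the DBMS_LOCK variant; the lock name and the 60-second timeout are illustrative choices:

DECLARE
  v_handle VARCHAR2(128);
  v_status INTEGER;
BEGIN
  -- Map a name to a lock handle (safe to call repeatedly).
  DBMS_LOCK.ALLOCATE_UNIQUE('person_table_update', v_handle);
  -- Take the lock exclusively; release_on_commit ties it to the
  -- transaction so COMMIT frees it for the other procedure.
  v_status := DBMS_LOCK.REQUEST(v_handle,
                                DBMS_LOCK.X_MODE,
                                timeout           => 60,
                                release_on_commit => TRUE);
  IF v_status = 0 THEN  -- 0 = lock granted
    -- run one of the two update procedures here
    COMMIT;
  END IF;
END;
/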

Keeping tables consistent during trigger execution?

I have a trigger that checks a couple of other tables before allowing a row to be inserted. However, between the time I check the other tables and the time the row is inserted, those tables may get updated.
How do I ensure the tables I'm checking remain in a consistent state until after the new row is inserted? I was thinking of taking out locks, but everything I've read boils down to: if you are not leaving locking to Oracle, you're almost certainly doing it wrong.
Oracle is already doing this for you: every query sees the database as of a single, consistent point in time (by default the start of the statement; with serializable isolation, the start of the transaction). This won't stop the data from being changed under you, though; your transaction just won't see it being changed. If you want to stop that data from being changed, then you can use SELECT ... FOR UPDATE, as Justin Cave suggests.
I would seriously question what you are doing, though; triggers, except in the most trivial cases, almost always lead to unexpected side effects. For illustration, a minimal sketch of the SELECT ... FOR UPDATE approach follows.
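This sketch assumes hypothetical tables orders and inventory; the names and the stock check are illustrative, not from the question:

CREATE OR REPLACE TRIGGER trg_orders_check
BEFORE INSERT ON orders
FOR EACH ROW
DECLARE
  v_stock inventory.stock%TYPE;
BEGIN
  -- Lock the row being checked so no other session can change it
  -- until the inserting transaction commits.
  SELECT stock INTO v_stock
    FROM inventory
   WHERE item_id = :NEW.item_id
     FOR UPDATE;
  IF v_stock < :NEW.qty THEN
    RAISE_APPLICATION_ERROR(-20001, 'insufficient stock');
  END IF;
END;
/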
