I need to execute a large number (up to ~1,000,000) of SQL statements on an Oracle database. These statements should result in a referentially consistent state at the end, and all of them should be rolled back if an error occurs. The statements do not come in referential order, so if foreign key constraints are enabled, one statement may cause a foreign key violation even though that violation would be fixed by a statement executed later on.
I tried disabling the foreign keys first and enabling them after all statements were executed, thinking I would be able to roll back if there was an actual foreign key violation at that point. I was wrong: I found out that every DDL statement in Oracle starts with an implicit commit, so there was no way to roll back the statements this way. Here is my script for disabling the foreign keys:
begin
  for i in (select constraint_name, table_name
              from user_constraints
             where constraint_type = 'R'
               and status = 'ENABLED')
  loop
    execute immediate 'alter table ' || i.table_name ||
                      ' disable constraint ' || i.constraint_name;
  end loop;
end;
After some research, I found that it is recommended to execute DDL statements like these in an autonomous transaction, so I tried running the script that way. This resulted in the following error:
ORA-00054: resource busy and acquire with NOWAIT specified
I am guessing this is because the main transaction still holds locks on the tables.
Am I doing something wrong here, or is there any other way to make this scenario work?
There are several potential approaches.
The first thing to consider is that whatever you do at the table level will apply to all sessions using that table. If you haven't got exclusive access to that table, you probably don't want to drop/recreate constraints, or disable/enable them.
The second thing to consider is that you probably don't want to be in a position of rolling back a million inserts/updates. Rolling back can be SLOW.
Generally I would load into a temporary table, then do a single INSERT from the temporary table into the destination table. Because it is a single statement, Oracle applies all the constraint checks at the end of it.
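A minimal sketch of that pattern, with made-up names (ORDERS as the destination, ORDERS_STAGE as the staging table):
create global temporary table orders_stage
  on commit preserve rows
  as select * from orders where 1 = 0;  -- same shape as the destination, no rows

-- ... run all the individual INSERTs against orders_stage ...

-- one statement into the real table: foreign keys are checked at the end
-- of the statement, so parent/child ordering within it doesn't matter
insert into orders select * from orders_stage;
commit;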
If you can't go through a temporary table (e.g. updates to existing data), before starting make the constraints deferrable initially immediate. Then, within your session,
SET CONSTRAINTS emp_job_nn, emp_salary_min DEFERRED;
You can then apply the changes and, when you commit, the constraints will be validated.
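Note that an existing constraint can't be altered to become deferrable; it has to be dropped and recreated. A sketch, with hypothetical constraint names:
-- one-time setup, outside the batch run
alter table emp drop constraint emp_dept_fk;
alter table emp add constraint emp_dept_fk
  foreign key (deptno) references dept (deptno)
  deferrable initially immediate;

-- in the loading session
set constraints emp_dept_fk deferred;
-- ... apply the changes ...
commit;  -- validation happens here; the commit fails if the constraint is still violated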
You should acquaint yourself with DML error logging, as it can help identify any rows causing violations.
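A sketch of it applied to the single-INSERT load above (the tag text and all names are made up):
-- create the error log table for ORDERS (default name ERR$_ORDERS)
begin
  dbms_errlog.create_error_log(dml_table_name => 'ORDERS');
end;
/

insert into orders
select * from orders_stage
log errors into err$_orders ('batch load') reject limit unlimited;

-- offending rows, along with the Oracle error, are kept here
select ora_err_number$, ora_err_mesg$ from err$_orders;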
Related
It is a well-known fact that in an Oracle database it is not possible to make a transaction out of multiple DDL statements.
However, is there any way to lock a specific set of database objects within the current connection, so that after a DDL statement is executed, all locks are held until they are explicitly released?
An obvious solution of this kind doesn't work, because executing the DDL statement automatically commits the transaction, and with it, the locks are released:
LOCK TABLE x ....;
LOCK TABLE y ....;
ALTER TABLE x ....; -- Does not work properly since table locks are released here
ALTER TABLE y ....;
ALTER TABLE x ....;
COMMIT;
The DBMS_LOCK option doesn't work either, because its locks are advisory: a concurrent session must cooperate by checking for the lock, and it must be aware of the lock's existence in the first place.
Moreover, I cannot control which statements concurrent threads/sessions execute. I can only execute queries in the current session, and it must be guaranteed that no intermediate queries on tables X and Y are executed from other sessions until the current session is finished.
Are there any ideas how this can be implemented?
PS: Please don't mention the high-level task or XY problem. There is no high-level task. The question is posed exactly as it is.
A bit of a joke (breaks all dependent PL/SQL), but... ;)
ALTER TABLE x RENAME TO x__my_precious;
ALTER TABLE y RENAME TO y__my_precious;
ALTER TABLE x__my_precious ...;
ALTER TABLE y__my_precious ...;
ALTER TABLE x__my_precious RENAME TO x;
ALTER TABLE y__my_precious RENAME TO y;
I'm pretty sure what you're trying to do isn't possible with Oracle's native transaction control. DDL will always end a transaction, so no lock on that object is going to survive it. Even if you immediately attempted to lock it after the DDL, another waiting session could slip in and obtain the lock before you do.
You can, however, serialize access to the table by utilizing another dummy table or row in a dummy table, assuming you control the code of any process wishing to access the table. If this is the case, then before accessing the table, attempt to lock the dummy table or a row in it first, and only if it succeeds continue with accessing the main table. Then the process that does DDL can take out that same lock (preventing other processes from proceeding), then do the DDL in a subroutine (named PL/SQL block) with PRAGMA AUTONOMOUS_TRANSACTION. That way the DDL ends the autonomous transaction rather than the main one, which still holds the lock on the dummy table.
You have to use a dummy table, because if you tried to use the same table you want to modify, you'd deadlock yourself. Of course, this only works if you can make all other processes do the lock-the-dummy-table safety check before they proceed.
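A minimal sketch of the scheme, with all names hypothetical:
-- one-time setup
create table ddl_gate (id number primary key);
insert into ddl_gate (id) values (1);
commit;

-- every cooperating process locks the gate row before touching table X:
--   select id from ddl_gate where id = 1 for update;

-- the DDL process takes the same lock, then runs its DDL autonomously
declare
  l_id ddl_gate.id%type;
  procedure run_ddl(p_stmt in varchar2) is
    pragma autonomous_transaction;  -- the DDL's implicit commit ends only this transaction
  begin
    execute immediate p_stmt;
  end;
begin
  select id into l_id from ddl_gate where id = 1 for update;
  run_ddl('alter table x ...');  -- placeholder DDL
  run_ddl('alter table y ...');
  commit;  -- releases the gate lock
end;
/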
Lastly, albeit what I said above should work, it is likely that you're trying to do something you shouldn't do. DDL against hot objects isn't a good idea. Whatever you're trying to do, there is probably a better way than modifying objects on the fly like this. Even if you are able to keep others locked out, you are likely to cause object reference errors, package invalidations, SQL cursor invalidations, etc. It can be a real mess.
We have a multi-threaded batch job ending up in a deadlock. I am getting conflicting answers from our DBAs as to what actually causes it.
Caused by: java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource
The error output references the SQL for inserting into Table A. Every row going into Table A should be unique. Table A has foreign keys to two other tables, both of which are indexed and have two-column primary keys. Many rows in Table A can point to the same FK value in the parent tables. Our code handles FK errors by inserting into the parent tables and then retrying the insert into Table A.
The SQL in the trace log refers to the Table A insert statement (it does not show the bind values). Does this mean definitively that two identical statements were trying to insert into Table A, in which case our logic is not thread-safe somewhere? Or could it really be that there were two inserts both referencing an unsatisfied FK, and the deadlock came from our error handling when inserting into the parent table? If so, wouldn't the SQL in the trace then reference the parent table statement?
Or, perversely, does the original insert attempt put a lock on the row, so that after handling the error the second insert attempt causes the deadlock? Any further debugging assistance?
There's not much info to work with, but my guess would be that two threads are trying to insert the same rows into one of the 'two other tables' at the same time. An idea on debugging is below.
Use a trigger on Table A and on the other two tables (three triggers in total) that writes the inserted data to logging tables in an autonomous transaction that commits. This way you can see the uncommitted inserts that were rolled back due to the deadlock: the rows that exist in the logging tables but not in the actual tables are the ones that were rolled back.
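A sketch of such a trigger for Table A (table and column names are made up; the same pattern applies to the two parent tables):
create table table_a_log (
  logged_at timestamp default systimestamp,
  key_col   varchar2(100)
);

create or replace trigger table_a_ins_log
after insert on table_a
for each row
declare
  pragma autonomous_transaction;  -- commits independently of the inserting transaction
begin
  insert into table_a_log (key_col) values (:new.key_col);
  commit;  -- survives a rollback of the deadlocked transaction
end;
/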
HTH, KR
In my PL/SQL process there is an execution of "alter table ... exchange partition ..." on some table.
The problem is that during that operation other users can try to access the target table.
One process that executed a select query on that table got this error:
ORA-08103 object no longer exists.
I think that it is not the same as 'object or view doesn't exist'. I think the 'object no longer exists' error comes when the process starts fine, and then the exchange (or some other operation) comes in from the side, so the process can't be completed.
The chance that this will happen is very low, because the exchange is very, very fast. But for this case, is there any idea how to solve it? How can I prevent this situation? Maybe a way to execute the exchange only when no one is touching the table?
Thanks.
This will happen when session #2 alters the table after session #1 has opened its cursor but before it has finished fetching the rows from that cursor.
I don't think there is a foolproof way to prevent the exchange from happening unless you are willing to change the code that other users are using to access the target table.
LOCK TABLE mytable IN EXCLUSIVE MODE will not wait for SELECT statements to complete, nor will it prevent new SELECT statements from starting, so acquiring an exclusive lock before attempting the ALTER TABLE will not work.
If you want to prevent the ALTER TABLE from happening at the same time as SELECTs, you need both to depend on acquiring the same lock. A relatively robust way to do that would be to use the DBMS_LOCK package to allocate a lock for that table. Call that lock "mytablelock".
Then, using DBMS_LOCK, your SELECT sessions would need to acquire "shared" locks on "mytablelock" before progressing. Your alter table sessions would need to acquire an "exclusive" lock on "mytablelock" before progressing.
This scheme would allow multiple SELECT sessions to run without interfering with each other, but it would prevent the ALTER TABLE from running while any SELECT was running.
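A sketch of the scheme (requires EXECUTE on DBMS_LOCK; the lock name, timeout, and error code are arbitrary):
declare
  l_handle varchar2(128);
  l_status integer;
begin
  -- map the name to a lock handle; every session gets the same handle
  -- (note: allocate_unique itself performs a commit)
  dbms_lock.allocate_unique('mytablelock', l_handle);

  -- SELECT sessions request shared mode; the ALTER TABLE session would
  -- request dbms_lock.x_mode instead
  l_status := dbms_lock.request(l_handle, dbms_lock.s_mode, timeout => 10);
  if l_status <> 0 then
    raise_application_error(-20001, 'lock request failed, status ' || l_status);
  end if;

  -- ... run the SELECTs (or, in the exclusive case, the exchange) ...

  l_status := dbms_lock.release(l_handle);
end;
/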
A (much) less robust, but simpler, way to do it would be to change the SELECT statements issued from the other sessions into SELECT ... FOR UPDATE. But that's a recipe for lots of unnecessary waits and deadlock errors.
I recently received a case in which my client came across the ORA-00001: unique constraint violated error. This happened when a program tried to truncate two tables and then insert data into them.
From the error-log file, the truncate step was completed:
delete from INTERNET_GROUP
delete from INTERNET_ITEM
BUT right after this, the insertion into the INTERNET_GROUP table triggered the ORA-00001 error. I am wondering whether there are any database settings related to this error? I have never used Oracle, and am wondering whether Oracle puts a lock on a row with a SELECT statement, in which case the row would be locked and somehow not deleted? Any help is appreciated.
Please know that there is a difference between truncate and delete. You say you truncated the table, but the log shows "delete from". That is entirely different.
If you're sure you want to empty the tables, try replacing with
truncate table internet_group reuse storage;
Mind you that a commit is not necessary with the truncate statement, as it is considered a DDL (data definition language) statement and not a DML (data manipulation language) statement like updates and deletes.
Also, there is no row locking on selects. But changes only become visible to other sessions in the database once they are committed.
I guess that is what happened: you deleted the records but did not execute a commit (yet), and subsequently inserted new records.
edit:
I now realize you're probably inserting multiple records....
The other option might be that the data itself causes the violation. Can you please provide the constraints on the table? There must be a primary key or unique constraint; you might want to check your dataset against it.
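To see which constraint it could be, something along these lines lists the primary key and unique constraints on the table together with their columns:
select c.constraint_name, c.constraint_type, cc.column_name, cc.position
  from user_constraints  c
  join user_cons_columns cc on cc.constraint_name = c.constraint_name
 where c.table_name = 'INTERNET_GROUP'
   and c.constraint_type in ('P', 'U')
 order by c.constraint_name, cc.position;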
We have a table TRANSMISSIONS(ID, NAME) which behaves funny in the following ways:
The statement to add a foreign key in another table referencing TRANSMISSIONS.ID won't finish
The statement to add a column to TRANSMISSIONS won't finish
The statement to disable/drop a unique constraint won't finish
The statement to disable/drop a trigger won't finish
TRANSMISSIONS' primary key is ID, and there is also a unique constraint on NAME; therefore there are indexes on ID and NAME. We also have a trigger which populates ID from a sequence, so that INSERT statements do not need to provide a value for ID.
Besides TRANSMISSIONS, there are two more tables behaving like this. For other tables, the above-mentioned statements work fine.
The database is used in an application with Hibernate, and due to an incorrect JPA configuration we produced very high values for ID for a time. Note that we use the trigger only for "manual" INSERT statements and that Hibernate produces ID values itself, also using the sequence.
The first thought was that the problems were due to the high IDs but we have the problems also with tables that never had such high IDs.
Anyway, we suspected that the indexes might be fragmented somehow and ran ALTER INDEX TRANSMISSIONS_PK SHRINK SPACE COMPACT, which ran through but showed no effect.
We also wanted to run ALTER TABLE TRANSMISSIONS SHRINK SPACE COMPACT, which didn't work because we first needed to run ALTER TABLE TRANSMISSIONS ENABLE ROW MOVEMENT, and that never finished.
We have another instance of the database which does not behave in such a funny way. So we think it might be that in the course of running the application the database got somehow into an inconsistent state.
Does someone have any suggestions as to what might have gone out of control or into an inconsistent state?
More hints:
There are no locks present on any objects in the database (according to information in v$lock and v$locked_object)
We tried all these statements in SQL Developer and also in SQL*Plus (the command-line tool).
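While one of the statements hangs, a query along these lines from a second session would show what the hung session is actually waiting on; that can reveal waits that never appear as locks in v$lock:
select sid, event, blocking_session, seconds_in_wait
  from v$session
 where state = 'WAITING'
   and wait_class <> 'Idle';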