I am trying to run the statement below to disable a constraint, but it does not execute.
It simply hangs and acquires a lock on table XYZ.
dbms_utility.exec_ddl_statement('alter table XYZ disable constraint ABC');
I also checked whether there were any locked objects: there were none before I ran the statement, but after running it, it has acquired a lock on the table.
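This is roughly how I checked for locks and blocking sessions (a sketch against the V$ and DBA views, with XYZ as the table name):

select sid, serial#, blocking_session, event, seconds_in_wait
  from v$session
 where blocking_session is not null;

select lo.session_id, lo.oracle_username, lo.locked_mode
  from v$locked_object lo
  join dba_objects o on o.object_id = lo.object_id
 where o.object_name = 'XYZ';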
Any clues on how to tackle this problem?
Note: this table has flashback enabled. Would that affect this in any way?
Related
The statement below takes a huge amount of time for a table containing 70 million records.
ALTER TABLE <table-name> ENABLE CONSTRAINT <constraint-name>
Does Oracle scan all the rows while enabling the constraint?
Even though the constraint did get enabled, the process hung for more than 5 hours.
Any ideas on how this can be optimized?
As others have said, depending on the constraint type it is possible to skip validating the existing data with ALTER TABLE ... ENABLE NOVALIDATE CONSTRAINT ..., and then check that data with an additional procedure or query.
You can find documentation about that here https://docs.oracle.com/cd/B28359_01/server.111/b28310/general005.htm#ADMIN11546
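A minimal sketch of that approach (my_table, my_constraint, parent_table and the check query are placeholders; adapt the check to what the constraint actually enforces):

-- Enable the constraint without validating the 70 million existing rows;
-- only new or modified rows are checked from this point on.
ALTER TABLE my_table ENABLE NOVALIDATE CONSTRAINT my_constraint;

-- Then verify the existing data separately, for example for a foreign key:
SELECT count(*)
  FROM my_table t
 WHERE NOT EXISTS (SELECT 1 FROM parent_table p WHERE p.id = t.parent_id);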
In my PL/SQL process there is an execution of "ALTER TABLE ... EXCHANGE PARTITION ..." on some table.
The problem is that during that operation other users can try to access the target table.
One process that executed a SELECT query on that table got this error:
ORA-08103: object no longer exists
I think this is not the same as 'object or view does not exist'.
I think the 'object no longer exists' error comes when the process starts fine, and then the exchange (or some other operation) happens on the side, so the process cannot be completed.
The chance of this happening is very low, because the exchange is very, very fast.
But for this case, is there any idea how to solve it? How can I prevent this situation?
Maybe a way to execute the exchange only if no one is touching the table?
Thanks.
This will happen when session #2 alters the table after session #1 has opened its cursor but before it has finished fetching the rows from that cursor.
I don't think there is a foolproof way to prevent the exchange from happening unless you are willing to change the code that other users are using to access the target table.
LOCK TABLE mytable IN EXCLUSIVE MODE will not wait for SELECT statements to complete, nor will it prevent new SELECT statements from starting, so acquiring an exclusive lock before attempting the ALTER TABLE will not work.
If you want to prevent the ALTER TABLE from happening at the same time as SELECTs, you need both to depend on acquiring the same lock. A relatively robust way to do that would be to use the DBMS_LOCK package to allocate a lock for that table. Call that lock "mytablelock".
Then, using DBMS_LOCK, your SELECT sessions would need to acquire "shared" locks on "mytablelock" before progressing. Your alter table sessions would need to acquire an "exclusive" lock on "mytablelock" before progressing.
This scheme would allow multiple SELECT sessions to run without interfering with each other, but it would prevent the ALTER TABLE from running while any SELECT was running.
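A sketch of what the SELECT side of that could look like with DBMS_LOCK (the lock name, timeout and error handling are illustrative only; you need EXECUTE on DBMS_LOCK):

declare
  l_handle varchar2(128);
  l_result integer;
begin
  -- Map the name "mytablelock" to a lock handle (note: this call commits)
  dbms_lock.allocate_unique('mytablelock', l_handle);
  -- Shared mode for readers; the ALTER TABLE session would use dbms_lock.x_mode
  l_result := dbms_lock.request(lockhandle        => l_handle,
                                lockmode          => dbms_lock.s_mode,
                                timeout           => 60,
                                release_on_commit => true);
  if l_result not in (0, 4) then  -- 0 = granted, 4 = we already hold it
    raise_application_error(-20001, 'could not get shared lock on mytablelock: ' || l_result);
  end if;
  -- ... run the SELECTs against the target table here ...
end;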
A (much) less robust, but simpler, way to do it would be to change the SELECT statements issued by the other sessions into SELECT ... FOR UPDATE statements. But that's a recipe for lots of unnecessary waits and deadlock errors.
We have a table TRANSMISSIONS(ID, NAME) which behaves funny in the following ways:
The statement to add a foreign key in another table referencing TRANSMISSIONS.ID won't finish
The statement to add a column to TRANSMISSIONS won't finish
The statement to disable/drop a unique constraint won't finish
The statement to disable/drop a trigger won't finish
TRANSMISSIONS' primary key is ID, and there is also a unique constraint on NAME, so there are indexes on ID and NAME. We also have a trigger which populates ID from a sequence, so that INSERT statements do not need to provide a value for ID.
Besides TRANSMISSIONS, there are two more tables behaving like this. For other tables, the above-mentioned statements work fine.
The database is used in an application with Hibernate, and due to an incorrect JPA configuration we produced high values for ID for a time. Note that we use the trigger only for "manual" INSERT statements; Hibernate produces the ID values itself, also using the sequence.
Our first thought was that the problems were due to the high IDs, but we also have the problems with tables that never had such high IDs.
Anyway, we suspected that the indexes might somehow be fragmented and ran ALTER INDEX TRANSMISSIONS_PK SHRINK SPACE COMPACT, which completed but had no effect.
We also wanted to run ALTER TABLE TRANSMISSIONS SHRINK SPACE COMPACT, which didn't work because we first had to run ALTER TABLE TRANSMISSIONS ENABLE ROW MOVEMENT, which never finished.
We have another instance of the database which does not behave in this funny way, so we think that in the course of running the application the database somehow got into an inconsistent state.
Does anyone have any suggestions as to what might have gone out of control or into an inconsistent state?
More hints:
There are no locks present on any objects in the database (according to information in v$lock and v$locked_object)
We tried all these statements in SQL Developer and also in SQL*Plus (the command-line tool).
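One more thing worth checking while one of the statements is actually hanging: sessions holding open, uncommitted transactions (for example idle connection-pool sessions that did DML and never committed), which is the classic reason for DDL on a table to wait forever. A sketch of such a check, assuming access to the V$ views:

select s.sid, s.serial#, s.username, s.program, s.status, t.start_time
  from v$transaction t
  join v$session s on s.taddr = t.addr;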
Can anyone please explain the locking modes in Oracle, i.e. share, exclusive and update locks? I have found many explanations of these, and according to them:
Share lock: nobody can change the data; it is for read-only purposes.
Exclusive lock: only one user/connection is allowed to change the data.
Update lock: rows are locked until the user commits or rolls back.
Then I tried a share lock to check how it works:
SQL> lock table emp in share mode;
Table(s) Locked.
SQL> update emp set sal=sal+10;
14 rows updated.
Then I found that a user can still change data after a share lock has been taken. So what makes it different from an exclusive lock and an update lock?
Another question: how do an update lock and an exclusive lock differ from each other? They seem almost equivalent.
Posting an explanation for future visitors; it also gives the answer.
Shared lock
Before I begin, let me first say that there are 5 types of table locks: row share, row exclusive, share, share row exclusive and exclusive. The shared lock is one of these. Also, please note that there are row locks, which are different from table locks. Follow the link I have provided at the end to read about all of this.
A shared lock is acquired on the table specified in the following statement: LOCK TABLE table IN SHARE MODE;
This lock prevents other transactions from getting "row exclusive" (the lock used by INSERT, UPDATE and DELETE statements), "share row exclusive" and "exclusive" table locks on that table; otherwise everything is permitted.
So this means that a shared lock will block other transactions from executing INSERT, UPDATE and DELETE statements on that table, but it will allow other transactions to lock rows using a "SELECT ... FOR UPDATE" statement, because that statement only needs a "row share" table lock, which is still permitted while a "share" lock is held.
A good summary table of the lock modes and what each one permits can be found in the Oracle documentation linked below.
Since many users will come across this question, I decided to go one step further and share my learning notes; I hope folks benefit from them.
The source of this information is also excellent further reading about Oracle locks.
It's very well explained in the documentation: http://docs.oracle.com/cd/E11882_01/server.112/e41084/ap_locks001.htm#SQLRF55502
In your example you locked the table in share mode. This does not prevent other sessions from locking the same object in share mode, but it does prevent them from locking it in exclusive mode, so you could not drop the table (which requires an exclusive lock) while another session holds a share lock on it or is updating it (an UPDATE takes a row exclusive table lock).
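A quick two-session demonstration of that, as a sketch (the exact behaviour of the DROP depends on DDL_LOCK_TIMEOUT, which defaults to 0):

-- Session 1: take a share lock (an UPDATE would similarly take a row exclusive lock)
LOCK TABLE emp IN SHARE MODE;

-- Session 2: another share lock is still allowed
LOCK TABLE emp IN SHARE MODE;
ROLLBACK;   -- releases session 2's table lock again

-- Session 2: but DDL that needs an exclusive lock fails at once with
-- ORA-00054: resource busy and acquire with NOWAIT specified
-- (or waits, if DDL_LOCK_TIMEOUT is set) until session 1 commits or rolls back
DROP TABLE emp;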
I need to execute a bunch of (up to ~1,000,000) SQL statements on an Oracle database. These statements should result in a referentially consistent state at the end, and all of them should be rolled back if an error occurs. The statements do not come in referential order, so if foreign key constraints are enabled, one statement may cause a foreign key violation even though that violation would be fixed by a statement executed later on.
I tried disabling the foreign keys first and enabling them after all the statements had been executed. I thought I would then be able to roll back if there was an actual foreign key violation. I was wrong, though: I found out that every DDL statement in Oracle starts with an implicit commit, so there was no way to roll back the statements this way. Here is my script for disabling the foreign keys:
begin
  for i in (select constraint_name, table_name
              from user_constraints
             where constraint_type = 'R'
               and status = 'ENABLED')
  loop
    execute immediate 'alter table ' || i.table_name ||
                      ' disable constraint ' || i.constraint_name;
  end loop;
end;
After some research, I found that it is recommended to execute DDL statements like these in an autonomous transaction, so I tried that. It resulted in the following error:
ORA-00054: resource busy and acquire with NOWAIT specified
I am guessing this is because the main transaction still holds locks on the tables, which blocks the DDL in the autonomous transaction.
Am I doing something wrong here, or is there any other way to make this scenario work?
There are several potential approaches.
The first thing to consider is that whatever you do at the table level will apply to all sessions using that table. If you haven't got exclusive access to that table, you probably don't want to drop/recreate constraints, or disable/enable them.
The second thing to consider is that you probably don't want to be in a position of rolling back a million inserts/updates. Rolling back can be SLOW.
Generally I would load into a temporary table. Then do a single INSERT from the temporary table into the destination table. As a single statement, Oracle will apply all the check constraints at the end.
If you can't go through a temporary table (e.g. for updates to existing data), then before starting make the constraints DEFERRABLE INITIALLY IMMEDIATE. Then, within your session:
SET CONSTRAINTS emp_job_nn, emp_salary_min DEFERRED;
You can then apply the changes and, when you commit, the constraints will be validated.
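Note that if the foreign keys were not created as deferrable, as far as I know you cannot change that with ALTER TABLE ... MODIFY CONSTRAINT; you have to drop and recreate them. A sketch with placeholder table and constraint names:

ALTER TABLE child_table DROP CONSTRAINT fk_child_parent;
ALTER TABLE child_table ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_table (id)
  DEFERRABLE INITIALLY IMMEDIATE;

-- In the loading session:
SET CONSTRAINTS ALL DEFERRED;
-- ... execute the statements ...
COMMIT;   -- deferred constraints are checked here; a violation rolls the whole transaction back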
You should acquaint yourself with DML error logging, as it can help identify any rows causing violations.
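A sketch of that, assuming the data is loaded into a hypothetical DEST table from a staging table (DBMS_ERRLOG creates the ERR$_DEST log table):

BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('DEST');   -- one-time setup, creates ERR$_DEST
END;

-- Rows that violate constraints go to ERR$_DEST instead of failing the whole statement
INSERT INTO dest
SELECT * FROM dest_staging
LOG ERRORS INTO err$_dest ('bulk load') REJECT LIMIT UNLIMITED;

-- Inspect the offending rows afterwards
SELECT ora_err_number$, ora_err_mesg$, ora_err_tag$ FROM err$_dest;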