Enable Constraint - Performance Impact - Oracle

The statement below consumes a huge amount of time on a table containing 70 million records.
ALTER TABLE <table-name> ENABLE CONSTRAINT <constraint-name>
Does Oracle scan all the rows of the table while enabling the constraint?
Even though the constraint got enabled, the process just hung for more than 5 hours.
Any ideas on how this can be optimized?

As others have said, depending on the constraint type it is possible to skip validating the existing data with ALTER TABLE ... ENABLE NOVALIDATE CONSTRAINT ..., and then check that data with an additional procedure or query.
You can find documentation about that here https://docs.oracle.com/cd/B28359_01/server.111/b28310/general005.htm#ADMIN11546
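A minimal sketch of that approach (assuming a hypothetical table BIG_TABLE with a unique constraint BIG_TABLE_UQ on column COL1; the names are illustrative only):

ALTER TABLE big_table ENABLE NOVALIDATE CONSTRAINT big_table_uq;

-- Existing rows are not validated, so check them with a separate query:
SELECT col1, COUNT(*)
  FROM big_table
 GROUP BY col1
HAVING COUNT(*) > 1;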

Related

Why unique index AND unique constraint (multi columns)?

So a table I am looking at has a unique constraint AND a unique index over multiple columns, and the exact same columns for both.
Is there a use for this or is the unique constraint redundant?
I agree that the existence of unique constraints and unique indexes does look redundant at first. It seems like a violation of Don't Repeat Yourself, allowing for confusing differences. But there are at least two reasons both exist - management features and allowing existing duplicates.
Management Features
In theory, a logical constraint can be created without worrying about the implementation. The constraint specifies what must be true, along with some options such as deferring the constraint until a commit.
In practice, constraints have such a large performance and storage penalty that the implementation must be considered. An index is required or else a single insert would require O(n) time instead of O(log(n)). Those indexes can take up a huge amount of space; someone might want to specify where it's stored, how it's compressed, etc.
Most of the time those features aren't important and using all the index defaults is fine. But sometimes storage and performance are critical and people will want to tweak the index without caring about the constraint.
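As an example of that tweaking, here is a sketch of creating the index separately so its storage can be controlled, then binding the constraint to it (the table, column, and tablespace names are hypothetical):

CREATE UNIQUE INDEX orders_uq_ix ON orders (customer_id, order_no)
  TABLESPACE idx_ts  -- control where the index is stored
  COMPRESS 1;        -- prefix-compress the first key column

ALTER TABLE orders
  ADD CONSTRAINT orders_uq UNIQUE (customer_id, order_no)
  USING INDEX orders_uq_ix;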
Allow Existing Duplicates
There is at least one case where a unique constraint does not have a unique index. It's possible to allow existing duplicate values but prevent any future duplicates by setting the constraint to NOVALIDATE and using a non-unique index.
--Create table and insert duplicate values.
create table test1(a number);
insert into test1 values(1);
insert into test1 values(1);
commit;
--Add a non-validated unique constraint, with a non-unique index.
alter table test1
add constraint test1_uq unique(a)
using index (create /* Not unique! */ index test1_uq on test1(a)) novalidate;
--Now inserting a duplicate raises: ORA-00001: unique constraint (JHELLER.TEST1_UQ) violated
insert into test1 values(2);
insert into test1 values(2);
The physical index must allow duplicates, but the logical constraint knows not to allow any more. This is a rare feature, though, and I'm not sure I've ever seen it in production code.

Creating a Trigger

How do I start a trigger so that this allows nobody to be able to rent a movie if their unpaid balance exceeds 50 dollars?
What you have here is a cross-row table constraint - i.e. you can't just put a single Oracle CONSTRAINT on a column, as these can only look at data within a single row at a time.
Oracle has support for only two cross-row constraint types - uniqueness (e.g. primary keys and unique constraints) and referential integrity (foreign keys).
In your case, you'll have to hand-code the constraint yourself - and with that comes the responsibility to ensure that the constraint is not violated in the presence of multiple sessions, each of which cannot see data inserted/updated by other concurrent sessions (at least, until they commit).
A simplistic approach is to add a trigger that issues a query to count how many records conflict with the new record; but this won't work, because the trigger cannot see rows that have been inserted/updated by other sessions but not yet committed. The trigger would therefore sometimes allow members to exceed the limit, as long as (for example) they get two cashiers to enter the data at separate terminals.
One way to get around this problem is to introduce some element of serialization - e.g. have the trigger first request a lock on the member record (e.g. with SELECT FOR UPDATE) before it checks the rentals; that way, if a second session tries to insert rentals for the same member, it will wait until the first session commits or rolls back.
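A minimal sketch of that approach, assuming hypothetical MEMBERS(member_id, unpaid_balance) and RENTALS(member_id, ...) tables and the 50-dollar limit from the question:

CREATE OR REPLACE TRIGGER rentals_balance_check
BEFORE INSERT ON rentals
FOR EACH ROW
DECLARE
  v_balance members.unpaid_balance%TYPE;
BEGIN
  -- Lock the member row so concurrent sessions serialize on this member.
  SELECT unpaid_balance
    INTO v_balance
    FROM members
   WHERE member_id = :NEW.member_id
     FOR UPDATE;

  IF v_balance > 50 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Unpaid balance exceeds 50; rental not allowed.');
  END IF;
END;
/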
Another way around this problem is to use an aggregating Materialized View, which would be based on a query that is designed to find any rows that fail the test; the expectation is that the MV will be empty, and you put a table constraint on the MV such that if a row was ever to appear in the MV, the constraint would be violated. The effect of this is that any statement that tries to insert rows that violate the constraint will cause a constraint violation when the MV is refreshed.
Writing the query for this based on your design is left as an exercise for the reader :)
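For the general shape of the idea, though, here is a rough sketch using a join MV and the same hypothetical MEMBERS and RENTALS tables (a real fast-refresh-on-commit MV has further restrictions, such as requiring the ROWID of every joined table in the select list and materialized view logs created WITH ROWID):

CREATE MATERIALIZED VIEW LOG ON members WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON rentals WITH ROWID;

CREATE MATERIALIZED VIEW rentals_over_limit
  REFRESH FAST ON COMMIT
AS
SELECT m.ROWID AS m_rid, r.ROWID AS r_rid, m.member_id
  FROM members m, rentals r
 WHERE r.member_id = m.member_id
   AND m.unpaid_balance > 50;

-- The MV should always be empty; any row appearing at refresh time
-- (i.e. at commit) violates this check and rolls the transaction back.
ALTER TABLE rentals_over_limit
  ADD CONSTRAINT rentals_over_limit_empty CHECK (1 = 0);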
If you want to restrict something about your table data, then you should have a look at Constraints, not Triggers.
Constraints ensure that certain conditions hold for your table data, as in your example.
Triggers fire when some action (i.e. INSERT, UPDATE, DELETE) takes place, and let you do some work as a reaction to that action.

Unique Constraint performance

Given a table with 450 million records and a unique constraint (no primary key, just a constraint made up of 6 columns), how can I improve its performance while inserting 5 million rows daily?
At the moment I just disable the constraint and enable it again after the load has finished, but this takes some time.
By the way, there is no unique index supporting the constraint... it would just get super huge.
If your import procedure ensures the uniqueness of new rows, you can enable the constraint with NOVALIDATE; the existing data in the table is then not checked.
See here:
http://docs.oracle.com/cd/B28359_01/server.111/b28286/clauses002.htm#SQLRF52204
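A sketch of what that load cycle could look like, with hypothetical names (note that re-enabling still has to build or maintain the supporting index, which is the expensive part):

ALTER TABLE big_tab DISABLE CONSTRAINT big_tab_uq;

-- ... bulk load the daily 5 million rows here ...

-- Re-enable without validating any existing rows (including the just-loaded ones):
ALTER TABLE big_tab ENABLE NOVALIDATE CONSTRAINT big_tab_uq;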

Does a unique constraint on multiple columns have performance issues? - Oracle

I am using an Oracle database and I have a table for customer records, and I want to put a unique key constraint on multiple VARCHAR2 columns, like:
CUST_ID (NUMBER),
CUST_Name (VARCHAR2),
Cust_N.I.C_NO (VARCHAR2) will make up the unique key.
When inserting a new record through Forms 6i, if an ORA-00001 error comes up, the user will be informed that it was a DUPLICATED record.
Please advise whether there will be any database performance issue when the records in this table exceed 50,000 or more.
If this is not good practice for avoiding duplicate records, then please suggest another approach.
Regards.
Unique constraints are enforced through an index. So there are additional reads involved in the enforcement process. However, the performance impact of the constraint is minimal compared to the performance impact incurred by resolving duplicate keys in the database. Not to mention the business impact of such data corruption.
Besides, 50000 rows is a toy-sized table. Seriously, you won't be able to measure the difference of an insert with and without the constraints.
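For reference, declaring such a constraint is a one-liner (a sketch with a hypothetical CUSTOMERS table; the N.I.C column is renamed here because dots are not legal in ordinary identifiers). Oracle creates, or reuses, an index on these columns automatically to enforce it:

ALTER TABLE customers
  ADD CONSTRAINT customers_uq UNIQUE (cust_id, cust_name, cust_nic_no);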

Direct-Path INSERT Oracle

I am reading about direct-path INSERT in the Oracle documentation: Loading Tables.
It is written that:
During direct-path INSERT operations, the database appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache. Free space in the table is not reused, and referential integrity constraints are ignored. Direct-path INSERT can perform significantly better than conventional insert.
Can anyone explain to me how referential integrity constraints are ignored? According to my understanding, it will load the data into the table ignoring the referential constraints, and after the insert it will check them.
If that is so, and I use something like this:
FORALL i IN v_temp.FIRST .. v_temp.LAST SAVE EXCEPTIONS
  INSERT /*+ APPEND_VALUES */ INTO orderdata
  VALUES (v_temp(i).id, v_temp(i).name);
COMMIT;
will this give me the correct index in case of any exceptions, and how?
Sorry to ask so many questions in one, but they are related to each other:
How is the referential constraint ignored?
What is the free space in the table mentioned above?
How will it give the correct index in case of any exceptions?
The first question should really be "Do I want/need to use direct-path insert?", and the second should be "Did my query use direct-path insert?"
If you need referential integrity checks, then you do not use direct path insert.
If you do not want the table to be exclusively locked for modifications, then do not use direct path insert.
If you remove data by deletion and only insert with this code, then do not use direct path insert.
One quick and easy check on whether direct-path insert was used is to issue a select of one row from the table immediately, before committing the insert. If it succeeds, then direct-path insert was not used; if it was used, you will receive an error message, because your change has to be committed before your session can read the table.
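In practice the check looks like this (a sketch reusing the orderdata table from the question and a hypothetical staging table):

INSERT /*+ APPEND */ INTO orderdata SELECT * FROM orderdata_stage;

SELECT * FROM orderdata WHERE ROWNUM = 1;
-- If the direct path was used, this raises:
-- ORA-12838: cannot read/modify an object after modifying it in parallel

COMMIT;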
Referential integrity is not ignored in that statement.
See this AskTom thread for an explanation and an example:
what it seems to neglect to say in that old documentation is that....
insert /*+ append */ will ignore the append hint and use conventional
path loading when the table has referential integrity or a trigger
Free space: as in, it doesn't reuse space freed up in the table by deletes, whereas a standard insert would.
I can't see anywhere that it says it will do a referential integrity check after the operation. I suspect you have to do that yourself.
Erm, what index?
Edit:
Index as in the 3rd row to insert, I believe; not necessarily anything to do with the table, unless the index in the inserts happens to be the key of the table.
Want to check whether it maintains referential integrity? Put a "bad" record in, e.g. an order with a customerid that doesn't exist.
Free space.
Let's say you have a table of nchar(2) with an int primary key,
e.g.
1 AA
2 AB
3 AC
So in your index on the key
1 points to 0
2 points to 4 (unicode one char = two bytes)
3 points to 8
Now you delete record with key 2, now you have
1 points to 0
3 points to 8
If you do a normal insert which reuses free space you get
1 points to 0
3 points to 8
4 points to 4
This direct insert stuff however saves time by not reusing the space so you get
1 points to 0
3 points to 8
4 points to 12
Very simplified scenario for illustrative purposes by the way...
