Add a primary key column to an old table - oracle

So I have a table with 50+ rows, and currently this table does not have any primary key/ID column in it. Now, if I try to add a primary key column, it is not allowed because data is already present in the table and there is no unique column or combination of columns. Can anyone suggest how to add a primary key column to an existing table with data in it?

(From 12.1) You can add a new auto-incremented surrogate key to a table with either:
alter table t
add ( t_id integer generated by default as identity );
Or
create sequence s;
alter table t
add ( t_id integer default s.nextval );
Both of these set the value for all the existing rows, so they may take a while on large tables!
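Once the column is populated, you can promote it to the primary key. A minimal sketch (the constraint name t_pk is just an illustrative assumption):
alter table t
  add constraint t_pk primary key ( t_id );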
You should also look at adding a unique constraint on the business keys. To do that, take the steps Marmite Bomber suggests below.

In your case, where the table suffers from duplicated records due to the missing PK definition, you may do a stepwise recovery.
In the first step you disable the creation of new duplicated rows.
Let's assume your PK candidate columns are col1 and col2, as in the example below:
CREATE TABLE test_pk as
SELECT 'A' col1, 1 col2 FROM dual UNION ALL
SELECT 'A' col1, 2 col2 FROM dual UNION ALL
SELECT 'B' col1, 1 col2 FROM dual UNION ALL
SELECT 'B' col1, 1 col2 FROM dual;
You cannot define the PK because of the existing duplicates:
ALTER table test_pk ADD CONSTRAINT my_pk UNIQUE (col1, col2);
-- ORA-02299: cannot validate (xxx.MY_PK) - duplicate keys found
But you can create a non-unique index on the PK columns and set up a constraint in the state ENABLE NOVALIDATE.
This will tolerate the existing duplicates, but reject new ones.
CREATE INDEX my_pk_idx ON test_pk(col1, col2);
ALTER TABLE test_pk
ADD CONSTRAINT my_pk UNIQUE (col1,col2) USING INDEX my_pk_idx
ENABLE NOVALIDATE;
Now you may insert new unique rows ...
INSERT INTO test_pk (col1, col2) VALUES ('A', 3);
-- OK
... but you can't create new duplicates:
INSERT INTO test_pk (col1, col2) VALUES ('A', 1);
-- ORA-00001: unique constraint (xxx.MY_PK) violated
Later, in the second step, you may decide to clean up the table and VALIDATE the constraint, which will give you the perfect unique key you expected:
-- cleanup
DELETE FROM TEST_PK
WHERE col1 = 'B' AND col2 = 1 AND rownum = 1;
ALTER TABLE test_pk MODIFY CONSTRAINT my_pk ENABLE VALIDATE;
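If you eventually want a real PRIMARY KEY rather than a validated UNIQUE constraint, a possible follow-up (a sketch, not part of the original recovery steps) is to swap the constraint once the duplicates are gone; the index my_pk_idx was created independently, so it survives the drop:
ALTER TABLE test_pk DROP CONSTRAINT my_pk;
ALTER TABLE test_pk ADD CONSTRAINT my_pk PRIMARY KEY (col1, col2) USING INDEX my_pk_idx;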

Related

Convert MERGE to UPDATE

I have the following MERGE statement. Is there a way to convert this into an UPDATE statement without using MERGE?
MERGE INTO tab1
USING (SELECT tab1.col1, tab2.col2
FROM tab1, tab2
WHERE tab1.col1 = tab2.col1) tab3
ON (tab1.col1 = tab3.col1)
WHEN MATCHED THEN UPDATE SET col2 = tab3.col2
What you are asking about is called "update through join", and contrary to a widely held belief, it is possible in Oracle. But there is a catch.
Obviously, the update - no matter how you attempt to perform it - is not well defined unless column col1 is unique in table tab2. That column is used for lookup in the update process; if its values are not unique, the update will be ambiguous. I ignore here idiotic retorts such as "uniqueness is needed only for those values also found in tab1.col1", or "there is no ambiguity as long as all values in tab2.col2 are equal when the corresponding values in tab2.col1 are equal".
The "catch" is this. The uniqueness of tab2.col1 may be a matter of data (you know it when you inspect the data), or a matter of metadata (there is a unique constraint, or a unique index, or a PK constraint, etc., on tab2.col1, which the parser can inspect without ever looking at the actual data).
merge will work even when uniqueness is only known by inspecting the data. It will still throw an error if uniqueness is violated - but that will be a runtime error (only after the data in tab2 is accessed from disk). By contrast, updating through a join requires the same uniqueness to be known ahead of time, through the metadata (or in other ways: for example if the second rowset - not a table but the table-like result of a query - is the result of an aggregation grouping on the join column; then the uniqueness is guaranteed by the definition of "aggregation").
Here is a brief example to show the difference.
Test data:
create table tab1 (col1 number, col2 number);
insert into tab1 (col1, col2) values (1, 3);
create table tab2 (col1 number, col2 number);
insert into tab2 (col1, col2) values (1, 6);
commit;
merge statement (with check at the end):
merge into tab1
using(
select tab1.col1,
tab2.col2
from tab1,tab2
where tab1.col1 = tab2.col1) tab3
on(tab1.col1 = tab3.col1)
when matched then
update
set col2 = tab3.col2;
1 row merged.
select * from tab1;
COL1 COL2
---------- ----------
1 6
Now let's restore table tab1 to its original data for the next test(s):
rollback;
select * from tab1;
COL1 COL2
---------- ----------
1 3
Update through join - with no uniqueness guaranteed in the metadata (will result in error):
update
( select t1.col2 as t1_c2, t2.col2 as t2_c2
from tab1 t1 join tab2 t2 on t1.col1 = t2.col1
)
set t1_c2 = t2_c2;
Error report -
SQL Error: ORA-01779: cannot modify a column which maps to a non key-preserved table
01779. 00000 - "cannot modify a column which maps to a non key-preserved table"
*Cause: An attempt was made to insert or update columns of a join view which
map to a non-key-preserved table.
*Action: Modify the underlying base tables directly.
Now let's add a unique constraint on the lookup column:
alter table tab2 modify (col1 unique);
Table TAB2 altered.
and try the update again (with the same update statement), plus verification:
update
( select t1.col2 as t1_c2, t2.col2 as t2_c2
from tab1 t1 join tab2 t2 on t1.col1 = t2.col1
)
set t1_c2 = t2_c2;
1 row updated.
select * from tab1;
COL1 COL2
---------- ----------
1 6
So - you can do it, if you use the correct syntax (as I have shown here) AND - very important - you have a unique or PK constraint or a unique index on column tab2.col1.
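As a hedged illustration of the runtime error mentioned above (this sketch first drops the unique constraint that was just added, purely to recreate the duplicate situation), a duplicate lookup value makes the MERGE fail only when it runs:
alter table tab2 drop unique (col1);
insert into tab2 (col1, col2) values (1, 7);
merge into tab1
using(
select tab1.col1, tab2.col2
from tab1, tab2
where tab1.col1 = tab2.col1) tab3
on(tab1.col1 = tab3.col1)
when matched then
update
set col2 = tab3.col2;
-- ORA-30926: unable to get a stable set of rows in the source tables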

Create Unique Constraint over function based index with existing duplicate values

I have a table with wrong data and I'd like to prevent new wrong data from being inserted while I fix the data and find out what process, or which statement, is making this happen.
I first created a UQ constraint over the columns that shouldn't be duplicated, but this got me into another problem: I need to apply uniqueness only when all the columns have a value; if there are nulls, I need to allow duplicate records over these columns. Something like this:
CREATE TABLE MYTAB (COL1 NUMBER, COL2 NUMBER, COL3 NUMBER, COL4 NUMBER); --EXAMPLE TABLE. I NEED NO DUPS OVER (COL1, COL3, COL4)
INSERT INTO MYTAB VALUES (1, 1, 1, 1); --OK
INSERT INTO MYTAB VALUES (1, 2, 1, 1); -- NOOK
INSERT INTO MYTAB VALUES (1, 3, NULL, NULL); --OK
INSERT INTO MYTAB VALUES (1, 4, NULL, NULL); --OK
If I create a constraint like this:
ALTER TABLE MYTAB
ADD CONSTRAINT U_CONSTRAINT UNIQUE (COL1, COL3, COL4) NOVALIDATE;
Last insert will crash.
I've tried with
CREATE UNIQUE INDEX FN_UIX_MYTAB
ON MYTAB (CASE WHEN COL2 IS NOT NULL THEN COL1 ELSE null END,
CASE WHEN COL2 IS NOT NULL THEN COL3 ELSE null END,
CASE WHEN COL2 IS NOT NULL THEN COL4 ELSE null END) ;
But the create crashes because the table has duplicate data. I'd need to create this index without validating existing data, which means the index would be applied only to newly inserted records.
I've tried also with:
CREATE INDEX FN_IX_MYTAB
ON MYTAB (CASE WHEN COL2 IS NOT NULL THEN COL1 ELSE null END,
CASE WHEN COL2 IS NOT NULL THEN COL3 ELSE null END,
CASE WHEN COL2 IS NOT NULL THEN COL4 ELSE null END) ;
ALTER TABLE MYTAB
ADD CONSTRAINT FN_UIX_MYTAB UNIQUE (COL1, COL3, COL4) USING INDEX FN_IX_MYTAB NOVALIDATE;
But this gives me error:
ORA-14196: Specified index cannot be used to enforce the constraint.
Is there a way to do what I've explained, or should I prevent wrong inserts in another way while I look for the origin of the problem? Any advice will be appreciated also.
Here's one possible approach. Create a materialized view, with refresh on commit (preferably fast refresh, if the circumstances permit; in this case, they should). The MV would be something like
create materialized view mymv
refresh fast on commit
as
select col1, col3, col4
from mytab
where col1 is not null and col3 is not null and col4 is not null
;
And then put a unique constraint on (col1, col3, col4) on the MV.
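A hedged sketch of the surrounding pieces (the object names mymv_idx and mymv_uq are made up here, and fast refresh may additionally require the base table's rowid in the MV's select list):
-- note: for fast refresh, this log must exist before the materialized view is created
create materialized view log on mytab with rowid;
-- because mytab already contains duplicates, the constraint on the MV's
-- container table is created ENABLE NOVALIDATE over a non-unique index,
-- so only new duplicates are rejected (at commit time, when the MV refreshes);
-- this relies on fast refresh, since a complete refresh would re-insert the
-- old duplicates and fail
create index mymv_idx on mymv (col1, col3, col4);
alter table mymv
  add constraint mymv_uq unique (col1, col3, col4)
  using index mymv_idx enable novalidate;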

Validate ALL the constraints when inserting a record in Oracle [duplicate]

I have a table in Oracle with several constraints. When I insert a new record and not all constraints are valid, Oracle raises only the "first" error. How can I get all the violations for my record?
CREATE TABLE A_TABLE_TEST (
COL_1 NUMBER NOT NULL,
COL_2 NUMBER NOT NULL,
COL_3 NUMBER NOT NULL,
COL_4 NUMBER NOT NULL
);
INSERT INTO A_TABLE_TEST values (1,null,null,2);
ORA-01400: cannot insert NULL into ("USER_4_8483C"."A_TABLE_TEST"."COL_2")
I would like to get something like this:
Column COL_2: cannot insert NULL
Column COL_3: cannot insert NULL
This would be also sufficient:
Column COL_2: not valid
Column COL_3: not valid
Of course I could write a trigger and check each column individually, but I prefer constraints over triggers; they are easier to maintain and don't require manually written code.
Any idea?
There is no straightforward way to report all possible constraint violations, because when Oracle stumbles on the first violation of a constraint, no further evaluation is possible and the statement fails, unless that constraint is deferred or the LOG ERRORS clause has been included in the DML statement. Note, though, that the LOG ERRORS clause won't catch all constraint violations for a row either; it just records the first one.
One possible approach is to:
create an exceptions table. This can be done by executing the $ORACLE_HOME/rdbms/admin/utlexpt.sql script. The table's structure is pretty simple;
disable all table constraints;
execute the DML;
enable all constraints with the exceptions into <<exceptions table name>> clause. If you executed the utlexpt.sql script, the table the exceptions are going to be stored in is called exceptions.
Test table:
create table t1(
col1 number not null,
col2 number not null,
col3 number not null,
col4 number not null
);
Try to execute an insert statement:
insert into t1(col1, col2, col3, col4)
values(1, null, 2, null);
Error report -
SQL Error: ORA-01400: cannot insert NULL into ("HR"."T1"."COL2")
Disable all the table's constraints:
alter table T1 disable constraint SYS_C009951;
alter table T1 disable constraint SYS_C009950;
alter table T1 disable constraint SYS_C009953;
alter table T1 disable constraint SYS_C009952;
Try to execute the previously failed insert statement again:
insert into t1(col1, col2, col3, col4)
values(1, null, 2, null);
1 rows inserted.
commit;
Now, enable the table's constraints and store exceptions, if there are any, in the exceptions table:
alter table T1 enable constraint SYS_C009951 exceptions into exceptions;
alter table T1 enable constraint SYS_C009950 exceptions into exceptions;
alter table T1 enable constraint SYS_C009953 exceptions into exceptions;
alter table T1 enable constraint SYS_C009952 exceptions into exceptions;
Check the exceptions table:
column row_id format a30;
column owner format a7;
column table_name format a10;
column constraint format a12;
select *
from exceptions
ROW_ID OWNER TABLE_NAME CONSTRAINT
------------------------------ ------- ------- ------------
AAAWmUAAJAAAF6WAAA HR T1 SYS_C009951
AAAWmUAAJAAAF6WAAA HR T1 SYS_C009953
Two constraints have been violated. To find out the column names, simply refer to the user_cons_columns data dictionary view:
column table_name format a10;
column column_name format a7;
column row_id format a20;
select e.table_name
, t.COLUMN_NAME
, e.ROW_ID
from user_cons_columns t
join exceptions e
on (e.constraint = t.constraint_name)
TABLE_NAME COLUMN_NAME ROW_ID
---------- ---------- --------------------
T1 COL2 AAAWmUAAJAAAF6WAAA
T1 COL4 AAAWmUAAJAAAF6WAAA
The above query gives us column names, and rowids of problematic records. Having rowids at hand, there should be no problem to find those records that cause constraint violation, fix them, and re-enable constraints once again.
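As a small, hedged sketch of that last step (the replacement values below are purely illustrative):
-- inspect the offending rows via the rowids captured in the exceptions table
select t.*
from   t1 t
where  t.rowid in (select row_id from exceptions);
-- fix them, clear the exceptions table, and re-run the
-- ALTER TABLE ... ENABLE CONSTRAINT statements shown above
update t1
set    col2 = 0, col4 = 0
where  rowid in (select row_id from exceptions);
delete from exceptions;
commit;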
Here is the script that has been used to generate alter table statements for enabling and disabling constraints:
column cons_disable format a50
column cons_enable format a72
select 'alter table ' || t.table_name || ' disable constraint '||
t.constraint_name || ';' as cons_disable
, 'alter table ' || t.table_name || ' enable constraint '||
t.constraint_name || ' exceptions into exceptions;' as cons_enable
from user_constraints t
where t.table_name = 'T1'
order by t.constraint_type
You would have to implement a before-insert trigger to loop through all the conditions that you care about.
Think about the situation from the database's perspective. When you do an insert, the database can basically do two things: complete the insert successfully or fail for some reason (typically a constraint violation).
The database wants to proceed as quickly as possible and not do unnecessary work. Once it has found the first constraint violation, it knows that the record is not going into the database. So, the engine wisely returns an error and stops checking further constraints. There is no reason for the engine to build the full list of violations.
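For completeness, a minimal sketch of that trigger idea, assuming the four NOT NULL checks on A_TABLE_TEST are the conditions of interest (the trigger name and error code are assumptions):
CREATE OR REPLACE TRIGGER a_table_test_bi
BEFORE INSERT ON a_table_test
FOR EACH ROW
DECLARE
  v_errors VARCHAR2(4000);
BEGIN
  -- collect every violated condition instead of stopping at the first one
  IF :new.col_1 IS NULL THEN v_errors := v_errors || 'Column COL_1: cannot insert NULL; '; END IF;
  IF :new.col_2 IS NULL THEN v_errors := v_errors || 'Column COL_2: cannot insert NULL; '; END IF;
  IF :new.col_3 IS NULL THEN v_errors := v_errors || 'Column COL_3: cannot insert NULL; '; END IF;
  IF :new.col_4 IS NULL THEN v_errors := v_errors || 'Column COL_4: cannot insert NULL; '; END IF;
  IF v_errors IS NOT NULL THEN
    RAISE_APPLICATION_ERROR(-20001, v_errors);
  END IF;
END;
/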
In the meantime I found a lean solution using deferred constraints:
CREATE TABLE A_TABLE_TEST (
COL_1 NUMBER NOT NULL DEFERRABLE INITIALLY DEFERRED,
COL_2 NUMBER NOT NULL DEFERRABLE INITIALLY DEFERRED,
COL_3 NUMBER NOT NULL DEFERRABLE INITIALLY DEFERRED,
COL_4 NUMBER NOT NULL DEFERRABLE INITIALLY DEFERRED
);
INSERT INTO A_TABLE_TEST values (1,null,null,2);
DECLARE
CHECK_CONSTRAINT_VIOLATED EXCEPTION;
PRAGMA EXCEPTION_INIT(CHECK_CONSTRAINT_VIOLATED, -2290);
REF_CONSTRAINT_VIOLATED EXCEPTION;
PRAGMA EXCEPTION_INIT(REF_CONSTRAINT_VIOLATED , -2292);
CURSOR CheckConstraints IS
SELECT TABLE_NAME, CONSTRAINT_NAME, COLUMN_NAME
FROM USER_CONSTRAINTS
JOIN USER_CONS_COLUMNS USING (TABLE_NAME, CONSTRAINT_NAME)
WHERE TABLE_NAME = 'A_TABLE_TEST'
AND DEFERRED = 'DEFERRED'
AND STATUS = 'ENABLED';
BEGIN
FOR aCon IN CheckConstraints LOOP
BEGIN
EXECUTE IMMEDIATE 'SET CONSTRAINT '||aCon.CONSTRAINT_NAME||' IMMEDIATE';
EXCEPTION
WHEN CHECK_CONSTRAINT_VIOLATED OR REF_CONSTRAINT_VIOLATED THEN
DBMS_OUTPUT.PUT_LINE('Constraint '||aCon.CONSTRAINT_NAME||' at Column '||aCon.COLUMN_NAME||' violated');
END;
END LOOP;
END;
It works with any check constraint (not only NOT NULL). Checking FOREIGN KEY constraints should work as well.
Adding, modifying or deleting constraints does not require any further maintenance.

How to remove constraint based on columns from oracle database?

Is there a way to remove a constraint (unique index) based on the column names?
What I would like to do is remove a constraint whose columns are name and name_type.
ALTER TABLE MY_TABLE DROP CONSTRAINT NAME_OF_CONSTRAINT;
I don't have a name so I would like to do it this way...
ALTER TABLE MY_TABLE DROP CONSTRAINT **WHERE COLUMN = col1 AND column = col2**
Is there any syntax to do something like this on a constraint?
I didn't think this was possible with a single statement, but it turns out it is, as shown in the examples in the documentation:
ALTER TABLE MY_TABLE DROP UNIQUE(col1, col2);
A complete example:
ALTER TABLE MY_TABLE ADD UNIQUE (col1, col2);
Table my_table altered.
SELECT CONSTRAINT_NAME, INDEX_NAME
FROM USER_CONSTRAINTS
WHERE TABLE_NAME = 'MY_TABLE';
CONSTRAINT_NAME INDEX_NAME
------------------------------ ------------------------------
SYS_C0092455 SYS_C0092455
ALTER TABLE MY_TABLE DROP UNIQUE(col1, col2);
Table my_table altered.
SELECT CONSTRAINT_NAME, INDEX_NAME
FROM USER_CONSTRAINTS
WHERE TABLE_NAME = 'MY_TABLE';
no rows selected
An alternative approach is to query the USER_CONSTRAINTS and USER_CONS_COLUMNS views to find the matching constraint name - presumably system-generated or you would already know it - and then use that name. If you need to do this as a script then you could query in a PL/SQL block, and plug the found constraint name into a dynamic ALTER TABLE statement.
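A hedged sketch of that approach (it assumes the target is a two-column unique constraint covering exactly COL1 and COL2 of MY_TABLE; adjust the literals to your own table and columns):
DECLARE
  v_name USER_CONSTRAINTS.CONSTRAINT_NAME%TYPE;
BEGIN
  SELECT c.CONSTRAINT_NAME
    INTO v_name
    FROM USER_CONSTRAINTS c
   WHERE c.TABLE_NAME = 'MY_TABLE'
     AND c.CONSTRAINT_TYPE = 'U'
     -- the constraint must consist of exactly the two wanted columns
     AND 2 = (SELECT COUNT(*)
                FROM USER_CONS_COLUMNS cc
               WHERE cc.CONSTRAINT_NAME = c.CONSTRAINT_NAME)
     AND 2 = (SELECT COUNT(*)
                FROM USER_CONS_COLUMNS cc
               WHERE cc.CONSTRAINT_NAME = c.CONSTRAINT_NAME
                 AND cc.COLUMN_NAME IN ('COL1', 'COL2'));
  EXECUTE IMMEDIATE 'ALTER TABLE MY_TABLE DROP CONSTRAINT ' || v_name;
END;
/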
Do a SELECT * FROM USER_CONS_COLUMNS or ALL_CONS_COLUMNS. This will give you the constraint name for the owner, table and column combination.
Out of the multiple rows returned for a column name, use the constraint names as necessary in the ALTER TABLE ... DROP CONSTRAINT... syntax.
Use dynamic sql to do it for all rows in a loop if you are absolutely sure that you can drop all of them.
This will give you an extra layer of protection so that you don't accidentally drop a constraint that was needed.

Copy or show the primary key value into another table as foreign key APEX

I have two tables, A and B. I want to copy or show the primary key column value from table A in the corresponding foreign key column in table B. Is there any method for this? Kindly help me.
Regards,
You can carry the primary key value over yourself while populating table B, or use a trigger that fires when you populate table A.
CREATE TABLE t1 (id1 NUMBER, dt DATE);
ALTER TABLE t1 ADD (
CONSTRAINT t1_pk
PRIMARY KEY
(id1));
CREATE TABLE t2 (id2 NUMBER, id1 NUMBER, dt2 DATE);
ALTER TABLE t2 ADD (
CONSTRAINT t2_pk
PRIMARY KEY
(id2));
ALTER TABLE t2
ADD CONSTRAINT t2_r01
FOREIGN KEY (id1)
REFERENCES t1 (id1);
First approach: this way you populate the second table yourself when you are inserting values.
INSERT INTO t1
VALUES (1, SYSDATE
);
INSERT INTO t2
VALUES (1, 1, SYSDATE
);
Second approach, with a trigger: when values are inserted into the first table, the second table's values are populated by a trigger, so the primary key value of the first table is inserted into the foreign key column of table 2.
CREATE OR REPLACE TRIGGER my_trigger
AFTER INSERT
ON t1
FOR EACH ROW
BEGIN
INSERT INTO t2
VALUES (1, :new.id1, SYSDATE
);
EXCEPTION
WHEN NO_DATA_FOUND
THEN
DBMS_OUTPUT.put_line (TO_CHAR (SQLERRM (-20299)));
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line (TO_CHAR (SQLERRM (-20298)));
END;
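A hedged variation of the same trigger so that id2 is not hardcoded to 1 (the sequence name t2_seq is an assumption, not part of the original answer):
CREATE SEQUENCE t2_seq;
CREATE OR REPLACE TRIGGER my_trigger
AFTER INSERT
ON t1
FOR EACH ROW
BEGIN
  -- copy the new primary key value of t1 into the foreign key column of t2
  INSERT INTO t2 (id2, id1, dt2)
  VALUES (t2_seq.NEXTVAL, :new.id1, SYSDATE);
END;
/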
