Is it possible to catch the primary key or the ROWID of the record that triggered the duplicate-key exception?
table1 has PK: col1 and a unique constraint Unique1 on col2.
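For reference, the table looks roughly like this (column types are just illustrative):
-- assumed DDL: col1 is the primary key, col2 carries the unique constraint
create table table1 (
  col1 number,
  col2 number,
  col3 number,
  constraint table1_pk primary key (col1),
  constraint unique1   unique (col2)
);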
e.g.
begin
insert into table1(col1, col2, col3)
values (1, 2, 3);
exception
when dup_val_on_index then
--- here, can you somehow get either the PK or the ROWID of the record that caused the uniqueness violation?
e.g.
update table1 set
col3 = 100
where rowid = "GETROWID" or col1 = "GETPK";
end;
In "normal" code you don't use constants to insert values; you'd normally have the value in a variable so your code would look more like:
DECLARE
strVar1 TABLE1.COL1%TYPE;
nVar2 NUMBER;
nVar3 NUMBER;
begin
SELECT s1, n2, n3
INTO strVar1, nVar2, nVar3
FROM SOME_TABLE;
insert into table1(col1, col2, col3)
values (strVar1, nVar2, nVar3);
exception
when dup_val_on_index then
update table1
set col3 = 100
where col1 = strVar1;
end;
But a better idea is to avoid the exception in the first place by using a MERGE statement:
MERGE INTO TABLE1 t1
USING (SELECT S1, N2, N3
FROM SOME_TABLE) s
ON (t1.COL1 = s.S1)
WHEN MATCHED THEN
UPDATE SET COL3 = 100
WHEN NOT MATCHED THEN
INSERT (COL1, COL2, COL3)
VALUES (s.S1, s.N2, s.N3);
The LOG ERRORS INTO clause could help here. You have to prepare an error table before using it:
begin
dbms_errlog.create_error_log('table1');
end;
That will create the err$_table1 table. Now run the insert with the additional clause:
insert into table1(col1, col2, col3) values (1, 2, 3)
log errors into err$_table1 ('some_tag_to_look_for') reject limit unlimited;
Querying the err$_table1 with
select * from err$_table1;
won't give you the ROWID (that column is populated for updates and deletes only), but you'll get the exact column values that caused the error and the name of the violated index.
If that is not enough, you can find the indexed columns here:
select * from all_ind_columns where owner='OWNER_NAME_from_error_table' and index_name = 'Name_from_err_table';
Thus you'll know which column(s) in the destination table already hold the value you are trying to insert -- and that is how you can find the ROWID or whatever else you need.
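For example, assuming the violated constraint is the unique one on col2 (and that col2 is numeric), you could join the logged value back to the base table to get the ROWID -- just a sketch:
-- look up the existing row that blocked the insert, via the value logged in the error table
select t.rowid, t.col1, t.col2
from table1 t
join err$_table1 e
  on t.col2 = to_number(e.col2)   -- err$ columns are stored as VARCHAR2, so convert as needed
where e.ora_err_tag$ = 'some_tag_to_look_for';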
I have two tables. If the data in table1 exceeds a predefined limit (say 2 rows), I need to copy the excess contents of table1 to table2 and delete those same rows from table1.
I used the query below to insert the excess data from table1 into table2.
insert into table2
SELECT * FROM table1 WHERE ROWNUM < ((select count(*) from table1)-2);
Now I need the delete query to remove those same rows from table1.
Thanks in advance.
A straightforward approach would be interim storage in a temporary table. Its content can be used to determine the data to be deleted from table1 as well as the source to feed table2.
Assume (slightly abusing notation) <pk> to be the PK column (or that of any candidate key) of table1 - usually there'll be some key that comprises only one column.
create global temporary table t_interim on commit preserve rows as
( SELECT <pk> pkc FROM table1 WHERE ROWNUM < ((select count(*) from table1)-2) )
;
insert into table2
select * from table1 where <pk> IN (
select pkc from t_interim
);
delete from table1 where <pk> IN (
select pkc from t_interim
);
Alternative
If the candidate key of table1 spans more than one column, use an EXISTS clause instead, as follows (<ck_i> denoting the i-th component of the candidate key):
create global temporary table t_interim on commit preserve rows as
( SELECT <ck_1> ck1, <ck_2> ck2, ..., <ck_n> ckn FROM table1 WHERE ROWNUM < ((select count(*) from table1)-2) )
;
insert into table2
select * from table1 t
where exists (
select 1
from t_interim t_i
where t.<ck_1> = t_i.ck1
and t.<ck_2> = t_i.ck2
...
and t.<ck_n> = t_i.ckn
)
;
delete from table1 t
where exists (
select 1
from t_interim t_i
where t.<ck_1> = t_i.ck1
and t.<ck_2> = t_i.ck2
...
and t.<ck_n> = t_i.ckn
)
;
(Technically you could try to adapt the first scheme by synthesizing a single key from the components of a multi-column candidate key, e.g. by concatenation. But you risk introducing ambiguities ( ('a bc', 'ab c') -> ('abc', 'abc') ) or running into implementation limits (max. VARCHAR2 length).)
Note
In case the table doesn't have a PK, you can apply the technique using any candidate key of table1. There will always be one; in the extreme case it is the set of all columns.
Such a situation may also be the right time to improve the DB design and add a (synthetic) PK column to table1 (and to any other tables in the system that lack one).
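If you go that route, a minimal sketch of adding a synthetic key (identity columns need Oracle 12c or newer; on older versions use a sequence instead):
-- adds and populates an identity column, then makes it the primary key
alter table table1 add (id number generated always as identity);
alter table table1 add constraint table1_pk primary key (id);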
I want to write a query which finds the differences between two tables and writes the updated or new rows into a third table. My two tables have identical column names. The third table, which captures the changes, has an extra column called COMMENT. I would like that column to indicate whether the row was inserted or updated.
**TABLE1 (BACKUP)**
KEY,FIRST_NAME,LAST_NAME,CITY
1,RAM,KUMAR,INDIA
2,TOM,MOODY,ENGLAND
3,MOHAMMAD,HAFEEZ,PAKISTAN
4,MONIKA,SAM,USA
5,MIKE,PALEDINO,USA
**TABLE2 (CURRENT)**
KEY,FIRST_NAME,LAST_NAME,CITY
1,RAM,KUMAR,USA
2,TOM,MOODY,ENGLAND
3,MOHAMMAD,HAFEEZ,PAKISTAN
4,MONIKA,SAM,INDIA
5,MIKE,PALEDINO,USA
6,MAHELA,JAYA,SL
**TABLE3 (DIFFERENCE FROM TABLE2 TO TABLE1)**
KEY,FIRST_NAME,LAST_NAME,CITY,COMMENT
1,RAM,KUMAR,USA,UPDATE
4,MONIKA,SAM,INDIA,UPDATE
6,MAHELA,JAYA,SL,INSERT
table scripts
DROP TABLE TABLE1;
DROP TABLE TABLE2;
DROP TABLE TABLE3;
CREATE TABLE TABLE1
(
KEY NUMBER,
FIRST_NAME VARCHAR2(100),
LAST_NAME VARCHAR2(100),
CITY VARCHAR2(50)
);
/
CREATE TABLE TABLE2
(
KEY NUMBER,
FIRST_NAME VARCHAR2(100),
LAST_NAME VARCHAR2(100),
CITY VARCHAR2(50)
);
/
CREATE TABLE TABLE3
(
KEY NUMBER,
FIRST_NAME VARCHAR2(100),
LAST_NAME VARCHAR2(100),
CITY VARCHAR2(50),
COMMENTS VARCHAR2(200)
);
/
INSERT ALL
INTO TABLE1
VALUES(1,'RAM','KUMAR','INDIA')
INTO TABLE1 VALUES(2,'TOM','MOODY','ENGLAND')
INTO TABLE1 VALUES(3,'MOHAMMAD','HAFEEZ','PAKISTAN')
INTO TABLE1 VALUES(4,'MONIKA','SAM','USA')
INTO TABLE1 VALUES(5,'MIKE','PALEDINO','USA')
SELECT 1 FROM DUAL;
/
INSERT ALL
INTO TABLE2
VALUES(1,'RAM','KUMAR','USA')
INTO TABLE2 VALUES(2,'TOM','MOODY','ENGLAND')
INTO TABLE2 VALUES(3,'MOHAMMAD','HAFEEZ','PAKISTAN')
INTO TABLE2 VALUES(4,'MONIKA','SAM','INDIA')
INTO TABLE2 VALUES(5,'MIKE','PALEDINO','USA')
INTO TABLE2 VALUES(6,'MAHELA','JAYA','SL')
SELECT 1 FROM DUAL;
I was using a MERGE statement to accomplish this, but I have hit a roadblock: it's throwing the error "SQL Error: ORA-00905: missing keyword
00905. 00000 - "missing keyword"". I don't understand where the error is. Please help.
INSERT INTO TABLE3
SELECT KEY,FIRST_NAME,LAST_NAME,CITY,NULL AS COMMENTS FROM TABLE2
MINUS
SELECT KEY,FIRST_NAME,LAST_NAME,CITY,NULL AS COMMENTS FROM TABLE1
;
MERGE INTO TABLE3 A
USING TABLE1 B
ON (A.KEY=B.KEY)
WHEN MATCHED THEN
UPDATE SET A.COMMENTS='UPDATED'
WHEN NOT MATCHED THEN
UPDATE SET A.COMMENTS='INSERTED';
There is no WHEN NOT MATCHED THEN UPDATE clause; you should use WHEN NOT MATCHED THEN INSERT. Refer to the MERGE documentation for details.
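For the result you describe, one option (just a sketch, assuming TABLE3 starts out empty and KEY identifies a row) is to compute both the changed/new rows and the comment in the USING subquery, so the NOT MATCHED branch can insert them:
MERGE INTO TABLE3 A
USING (SELECT D.*,
              CASE WHEN B.KEY IS NULL THEN 'INSERT' ELSE 'UPDATE' END AS COMMENTS
         FROM (SELECT KEY, FIRST_NAME, LAST_NAME, CITY FROM TABLE2
               MINUS
               SELECT KEY, FIRST_NAME, LAST_NAME, CITY FROM TABLE1) D
         LEFT JOIN TABLE1 B ON B.KEY = D.KEY) S
ON (A.KEY = S.KEY)
WHEN MATCHED THEN
  UPDATE SET A.FIRST_NAME = S.FIRST_NAME,
             A.LAST_NAME  = S.LAST_NAME,
             A.CITY       = S.CITY,
             A.COMMENTS   = S.COMMENTS
WHEN NOT MATCHED THEN
  INSERT (KEY, FIRST_NAME, LAST_NAME, CITY, COMMENTS)
  VALUES (S.KEY, S.FIRST_NAME, S.LAST_NAME, S.CITY, S.COMMENTS);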
A few assumptions made about the data:
An INSERT event will be a record identified by its key in table2 (current data) that does not have a matching key in the original back-up table: table1.
An UPDATE event is a row whose KEY exists in both table1 and table2 but whose field values differ.
Records which did not change between tables are not to be recorded in table3.
Example Query: Check for Updates
SELECT UPD_QUERY.NEW_CITY, 'UPDATED' as COMMENTS
FROM (SELECT CASE WHEN REPLACE(CURR.CITY, BKUP.CITY,'') IS NOT NULL THEN CURR.CITY
ELSE NULL END as NEW_CITY
FROM table1 BKUP, table2 CURR
WHERE BKUP.KEY = CURR.KEY) UPD_QUERY
WHERE UPD_QUERY.NEW_CITY is NOT NULL;
You can repeat this comparison method for the other fields:
SELECT UPD_QUERY.*
FROM (SELECT CURR.KEY,
CASE WHEN REPLACE(CURR.FIRST_NAME, BKUP.FIRST_NAME,'') IS NOT NULL
THEN CURR.FIRST_NAME
ELSE NULL END as FIRST_NAME,
CASE WHEN REPLACE(CURR.LAST_NAME, BKUP.LAST_NAME,'') IS NOT NULL
THEN CURR.LAST_NAME
ELSE NULL END as LAST_NAME,
CASE WHEN REPLACE(CURR.CITY, BKUP.CITY,'') IS NOT NULL
THEN CURR.CITY
ELSE NULL END as CITY
FROM table1 BKUP, table2 CURR
WHERE BKUP.KEY = CURR.KEY) UPD_QUERY
WHERE COALESCE(UPD_QUERY.FIRST_NAME, UPD_QUERY.LAST_NAME, UPD_QUERY.CITY)
is NOT NULL;
NOTE: This could get unwieldy very quickly if many columns are compared, since the target table design (table3) requires not only identifying that a change occurred but also recording the field and its new value.
Example Query: Look for Newly Added Records
SELECT CURR.*, 'INSERTED' as COMMENTS
FROM table2 CURR, table1 BKUP
WHERE CURR.KEY = BKUP.KEY(+)
AND BKUP.KEY is NULL;
Basically MERGE forces the operation: MATCHED=UPDATE (or DELETE), NOT MATCHED = INSERT. It's in the docs.
You can do what you want, but you need two insert statements with different set operators.
For UPDATED:
Insert into table3
select * from table1
INTERSECT
select * from table2;
For INSERTED:
Insert into table3
select * from table2
MINUS
select * from table1;
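Note that a bare INTERSECT returns the rows that are identical in both tables rather than the updated ones, and table3 has the extra COMMENTS column, so in practice the two inserts need a small twist. A sketch against the sample tables above, still built on set operators (MINUS for the changed/new rows, plus a key filter to tell the two cases apart):
insert into table3 (key, first_name, last_name, city, comments)
select d.*, 'UPDATE'
from (select key, first_name, last_name, city from table2
      minus
      select key, first_name, last_name, city from table1) d
where d.key in (select key from table1);

insert into table3 (key, first_name, last_name, city, comments)
select d.*, 'INSERT'
from (select key, first_name, last_name, city from table2
      minus
      select key, first_name, last_name, city from table1) d
where d.key not in (select key from table1);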
I have a table which does not have any unique key column, and I want to perform a bulk update using a self join.
Update
(
select t1.Col1 col1, t2.col1 col2
from table t1
inner join table t2 on <join condition>
where <condition>
)
Set col1 = col2
but as the table does not have a unique key column, it gives this error:
ORA-01779: cannot modify a column which maps to a non key-preserved
table.
Is there any solution other than adding a unique constraint? :)
You should be able to refactor the query to do a correlated update
UPDATE table t1
SET col1 = (SELECT col1
FROM table t2
WHERE t1.<<some column>> = t2.<<some column>>)
WHERE EXISTS( SELECT 1
FROM table t2
WHERE t1.<<some column>> = t2.<<some column>>)
I need to update a query so that it checks that a duplicate entry does not exist before insertion. In MySQL I can just use INSERT IGNORE so that if a duplicate record is found it just skips the insert, but I can't seem to find an equivalent option for Oracle. Any suggestions?
If you're on 11g you can use the hint IGNORE_ROW_ON_DUPKEY_INDEX:
SQL> create table my_table(a number, constraint my_table_pk primary key (a));
Table created.
SQL> insert /*+ ignore_row_on_dupkey_index(my_table, my_table_pk) */
2 into my_table
3 select 1 from dual
4 union all
5 select 1 from dual;
1 row created.
Check out the MERGE statement. This should do what you want - it's the WHEN NOT MATCHED clause that will do this.
Due to Oracle's lack of support for a true VALUES() clause, the syntax for a single record with fixed values is pretty clumsy though:
MERGE INTO your_table yt
USING (
SELECT 42 as the_pk_value,
'some_value' as some_column
FROM dual
) t on (yt.pk = t.the_pk_value)
WHEN NOT MATCHED THEN
INSERT (pk, the_column)
VALUES (t.the_pk_value, t.some_column);
A different approach (if you are e.g. doing bulk loading from a different table) is to use the "Error logging" facility of Oracle. The statement would look like this:
INSERT INTO your_table (col1, col2, col3)
SELECT c1, c2, c3
FROM staging_table
LOG ERRORS INTO errlog ('some comment') REJECT LIMIT UNLIMITED;
Afterwards all rows that would have thrown an error are available in the table errlog. You need to create that errlog table (or whatever name you choose) manually before running the insert using DBMS_ERRLOG.CREATE_ERROR_LOG.
See the manual for details.
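A minimal sketch of that call (errlog is just the name used in the statement above; without the second argument the procedure defaults to ERR$_<table_name>):
begin
  -- create the error log table 'ERRLOG' for YOUR_TABLE
  dbms_errlog.create_error_log(dml_table_name => 'YOUR_TABLE',
                               err_log_table_name => 'ERRLOG');
end;
/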
I don't think there is one, but to save time you can attempt the insert and ignore the inevitable error:
begin
insert into table_a( col1, col2, col3 )
values ( 1, 2, 3 );
exception when dup_val_on_index then
null;
end;
/
This will only ignore exceptions raised specifically by duplicate primary key or unique key constraints; everything else will be raised as normal.
If you don't want to do this then you have to select from the table first, which isn't really that efficient.
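For completeness, a sketch of that check-first alternative (using the same example table and values as above):
declare
  l_cnt pls_integer;
begin
  -- look before you leap: only insert when no matching row exists yet
  select count(*) into l_cnt from table_a where col1 = 1;
  if l_cnt = 0 then
    insert into table_a ( col1, col2, col3 ) values ( 1, 2, 3 );
  end if;
end;
/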
Another variant
Insert into my_table (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, groups g
where NOT EXISTS (select 1
from my_table a
where a.student_id = p.studentid
and a.group_id = g.groupid)
or you could do
Insert into my_table (student_id, group_id)
select distinct p.studentid, g.groupid
from person p, groups g
MINUS
select student_id, group_id
from my_table
A simple solution:
insert into t1
select * from t2
where not exists
(select 1 from t1 where t1.id = t2.id)
This one isn't mine, but it came in really handy when using SQL*Loader:
create a view that points to your table:
CREATE OR REPLACE VIEW test_view
AS SELECT * FROM test_tab;
create the trigger:
CREATE OR REPLACE TRIGGER test_trig
INSTEAD OF INSERT ON test_view
FOR EACH ROW
BEGIN
INSERT INTO test_tab VALUES
(:NEW.id, :NEW.name);
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN NULL;
END test_trig;
and in the ctl file, insert into the view instead:
OPTIONS(ERRORS=0)
LOAD DATA
INFILE 'file_with_duplicates.csv'
INTO TABLE test_view
FIELDS TERMINATED BY ','
(id, field1)
How about simply adding a unique index (or constraint) on whatever fields you need to check for dupes? It saves a read check.
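A sketch of what that looks like (table and column names are only placeholders):
-- declare the duplicate-defining columns unique; subsequent dupes raise ORA-00001
alter table my_table add constraint my_table_uq unique (student_id, group_id);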
Yet another "where not exists" variant, using dual:
insert into t1(id, unique_name)
select t1_seq.nextval, 'Franz-Xaver' from dual
where not exists (select 1 from t1 where unique_name = 'Franz-Xaver');