Why is DML error logging ignored in a remote DB (Oracle)?

I have a table, DIM_BP, in two databases, with the same structure and data.
Both tables have a PK on some columns.
There is an INSERT statement with DML error logging to catch the errors during the insert.
When I run the insert command from DB1 into DB2 over a dblink, I get a PK unique constraint error and the statement fails (the DML error logging is ignored).
But when I run it from DB1 into DB1 (locally), there is no error, and the error table is filled with the rejected rows.
Example:
-- truncate the error table
truncate table DWH.ERR$_DWH_CONV;
table DWH.ERR$_DWH_CONV truncated.
-- truncate table in local DB
truncate table DWH.DIM_BP;
table DWH.DIM_BP truncated.
-- (log in to the remote server and) truncate the table
truncate table DWH.DIM_BP;
table DWH.DIM_BP truncated.
-- log in to the local DB again
-- first insert into the remote
-- finished OK and committed
INSERT /*+ monitor */ INTO DWH.DIM_BP#DWH_DEV
SELECT * FROM ...
LOG ERRORS INTO DWH.ERR$_DWH_CONV ('DWH.DIM_BP#DWH_DEV.2019-04-16 16:05:58') REJECT LIMIT UNLIMITED;
130,091 rows inserted.
commit;
-- first insert into the local
-- finished OK and committed
INSERT /*+ monitor */ INTO DWH.DIM_BP
SELECT * FROM ...
LOG ERRORS INTO DWH.ERR$_DWH_CONV ('DWH.DIM_BP#DWH_DEV.2019-04-16 16:05:58') REJECT LIMIT UNLIMITED;
130,091 rows inserted.
commit;
-- run the same insert again into the local table (130,091 rows)
INSERT /*+ monitor */ INTO DWH.DIM_BP
SELECT * FROM ...
LOG ERRORS INTO DWH.ERR$_DWH_CONV ('DWH.DIM_BP#DWH_DEV.2019-04-16 16:05:58') REJECT LIMIT UNLIMITED;
0 rows inserted.
COMMIT;
select count(*) from dwh.ERR$_DWH_CONV;
result: 130091
-- run the same insert again into the REMOTE table (130,091 rows)
-- RAISED AN ERROR
INSERT /*+ monitor */ INTO DWH.DIM_BP#DWH_DEV
SELECT * FROM ...
LOG ERRORS INTO DWH.ERR$_DWH_CONV ('DWH.DIM_BP#DWH_DEV.2019-04-16 16:05:58') REJECT LIMIT UNLIMITED;
SQL Error: ORA-00001: unique constraint (DWH.DIM_BP_PK) violated
-- check the error table again
select count(*) from dwh.ERR$_DWH_CONV;
result: 130091 (no change)
This is the constraint:
CONSTRAINT "DIM_BP_PK" PRIMARY KEY... ENABLED;

Related

Materialized view fast refresh - insert and delete when updating base table

Hello fellow Stack Overflowers,
TL;DR: Do MVIEWs use UPDATE or DELETE + INSERT during refresh?
Some time ago I ran into an obscure thing when I was fiddling with materialized views in Oracle. Here is my example:
2 base tables
MVIEW logs for both tables
PKs for both tables
MVIEW created as a join of these base tables
PK for MVIEW
Here is some example code:
-- ========================= DDL section =========================
/* drop tables */
drop table tko_mview_test_tb;
drop table tko_mview_test2_tb;
/* drop mview */
drop materialized view tko_mview_test_mv;
/* create tables */
create table tko_mview_test_tb as
select 1111 as id, 'test' as code, 'hello world' as data, sysdate as timestamp from dual
union
select 2222, 'test2' as code, 'foo bar', sysdate - 1 from dual;
create table tko_mview_test2_tb as
select 1000 as id, 'test' as fk, 'some string' as data, sysdate as timestamp from dual;
/* create table PKs */
alter table tko_mview_test_tb
add constraint mview_test_pk
primary key (id);
alter table tko_mview_test2_tb
add constraint mview_test2_pk
primary key (id);
/* create mview logs */
create materialized view log
on tko_mview_test_tb
with rowid, (data);
create materialized view log
on tko_mview_test2_tb
with rowid, (data);
/* create mview */
create materialized view tko_mview_test_mv
refresh fast on commit
as select a.code
, a.data
, b.data as data_b
, a.rowid as rowid_a
, b.rowid as rowid_b
from tko_mview_test_tb a
join tko_mview_test2_tb b on b.fk = a.code;
/* create mview PK */
alter table tko_mview_test_mv
add constraint mview_test3_pk
primary key (code);
According to dbms_mview.explain_mview, my MVIEW is capable of fast refresh.
Well, in this particular case (not in the example here) the MVIEW is referenced by an FK from some other table. Because of that, I found out that when I make a change in one of the base tables and the refresh of the MVIEW is triggered, I get an error message:
ORA-12048: error encountered while refreshing materialized view "ABC"
ORA-02292: integrity constraint (ABC_FK) violated
I was like what the hell?? So I started digging: I created a trigger on that MVIEW, something like this:
/* create trigger on MVIEW */
create or replace trigger tko_test_mview_trg
  after insert or update or delete
  on tko_mview_test_mv
  referencing old as o new as n
  for each row
begin
  if updating then
    dbms_output.put_line('update');
  elsif inserting then
    dbms_output.put_line('insert');
  elsif deleting then
    dbms_output.put_line('delete');
  end if;
end tko_test_mview_trg;
/
So I was able to see what was happening. According to my trigger, every time I do an UPDATE on the base table (not an INSERT or DELETE), there are actually DELETE and INSERT operations on the MVIEW table.
update tko_mview_test2_tb
set data = 'some sting'
where id = 1000;
commit;
Output
delete
insert
Is this the correct way a MVIEW refresh works? Are there no UPDATEs on the MVIEW table when refreshing the MVIEW?
Regards,
Tom
We have seen the same behavior after upgrading from Oracle 12.1 to Oracle 19.x.
Newly created mviews seem to behave the same: a delete/insert during the refresh instead of the 'expected' update. Not sure if it is bad or wrong... but it can be 'fixed'.
Apply patch 30781970 (don't forget _fix_control) and recreate the mview.
Reference: Bug 30781970 - MVIEW REFRESH IS FAILING WITH ORA-1 ERROR WITH TRIGGER PRESENT ON MVIEW (Doc ID 30781970.8)
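For reference, switching on a fix control after patching usually looks like the following (a sketch: the fix control number is assumed to match the bug number above, and the control only exists if the patch actually ships one):
-- enable the fix for this bug, then drop and recreate the mview
ALTER SYSTEM SET "_fix_control" = '30781970:1' SCOPE = BOTH;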

SELECT AS OF a version before column drop

1/ Do FLASHBACK and SELECT AS OF / VERSIONS BETWEEN use the same source of history to fall back to? This question is related to the second one.
2/ I am aware that FLASHBACK cannot go back before a DDL change.
My question is: for SELECT AS OF, would it be able to select something from before a DDL change?
Take for example
CREATE TABLE T
(col1 NUMBER, col2 NUMBER);
INSERT INTO T (col1, col2) VALUES (1, 1);
INSERT INTO T (col1, col2) VALUES (2, 2);
COMMIT;
-- wait ~15 seconds (e.g. exec dbms_lock.sleep(15))
ALTER TABLE T DROP COLUMN col2;
SELECT * FROM T
AS OF SYSTIMESTAMP - INTERVAL '10' SECOND;
Would the select return 2 columns or 1?
Pardon me, I do not have a database at hand to test.
Any DDL that alters the structure of a table invalidates any existing undo data for the table, so you will get ORA-01466: unable to read data - table definition has changed.
Here is a simple test
CREATE TABLE T
(col1 NUMBER, col2 NUMBER);
INSERT INTO T (col1, col2) VALUES (1, 1);
INSERT INTO T (col1, col2) VALUES (2, 2);
COMMIT;
-- wait ~15 seconds (e.g. exec dbms_lock.sleep(15))
ALTER TABLE T DROP COLUMN col2;
SELECT * FROM T
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' SECOND);
ERROR: ORA-01466 upon executing the above SELECT statement.
However, DDL operations that alter only the storage attributes of a table do not invalidate undo data, so you can still use flashback query across them.
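For example, a storage-only change such as the following should not break a flashback query (a minimal sketch; PCTFREE is a physical/storage attribute):
ALTER TABLE T PCTFREE 20;
-- still works, because the table structure is unchanged:
SELECT * FROM T
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' SECOND);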
1) FLASHBACK TABLE and SELECT ... AS OF use the same source: UNDO. There is also FLASHBACK DATABASE; although it works on the same principle, it uses a separate source, flashback logs, which must be explicitly configured.
2) FLASHBACK TABLE and flashback queries can go back to before a DDL change if you enable a flashback archive.
To use that feature, add a few statements to the sample code:
CREATE FLASHBACK ARCHIVE my_flashback_archive TABLESPACE users RETENTION 10 YEAR;
...
ALTER TABLE t FLASHBACK ARCHIVE my_flashback_archive;
Now this statement will return 1 column:
SELECT * FROM T;
And this statement will return 2 columns:
SELECT * FROM T AS OF SYSTIMESTAMP - INTERVAL '10' SECOND;

Locking on DELETE from parent table instead of error

I have Oracle Database 11g Enterprise Edition Release 11.2.0.1.0.
I have a parent table t1, and t2 with a foreign key that references t1(col1).
What I'm wondering is why the locking happens.
Please check what I've done...
session 1
SQL> create table t1(col1 char(1), primary key(col1));
Table created.
SQL> insert into t1 values('1');
1 row created.
SQL> insert into t1 values('2');
1 row created.
SQL> insert into t1 values('3');
1 row created.
SQL> insert into t1 values('4');
1 row created.
SQL> insert into t1 values('5');
1 row created.
SQL> commit;
Commit complete.
SQL> create table t2(col1 char(1), col2 char(2), foreign key(col1) references t1(col1));
Table created.
SQL> insert into t2 values('1','0');
1 row created.
SQL> commit;
Commit complete.
SQL> update t2 set col2='9'; --not committed yet!
1 row updated.
session 2
SQL> delete from t1; -- Lock happens here!!!
session 1
SQL> commit;
Commit complete.
session 2
delete from t1 -- The error occurs after I commit the update in session 1.
*
ERROR at line 1:
ORA-02292: integrity constraint (KMS_USER.SYS_C0013643) violated - child record found
Could anyone explain to me why this happens?
delete from t1; tries to lock the child table, t2. While the session is waiting on that whole-table lock, it can't even attempt to delete anything yet.
This unusual locking behavior occurs because you have an unindexed foreign key.
If you create an index, create index t2_idx on t2(col1);, you will get the ORA-02292 error instead of the lock.
The lock is coming from your line: insert into t2 values('1','0'); The lock does not happen when you delete from t1 in session 2.
Think about it. Once you insert this row in session 1, there is a reference from t2.col1 back to t1.col1. The foreign key has been validated at that point, and Oracle knows that the '1' exists in t1. If session 2 could delete that row from t1, then session 1 would have an uncommitted row in t2 with an invalid reference to t1, which doesn't make any sense.
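To see the difference, index the foreign key and rerun the scenario (a sketch using the tables above; with the index in place, the delete should fail immediately instead of waiting on the table lock):
create index t2_idx on t2(col1);
-- session 2, while session 1 still holds its uncommitted update:
delete from t1;
-- expected: ORA-02292: integrity constraint (...) violated - child record found,
-- raised right away rather than after a lock wait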

SQL*Loader-like errors in a table

I have external tables, and I'd like to extract the data from those tables and insert/merge it into other tables.
Now, when a select-from/insert-into query or a merge query runs, it is possible (and quite likely) that the data is of bad quality, which will break the query. Say there is 000000 as a date in the external table, which will break the query if I am merging data.
How can I log these errors in a table, for example an error table that records the error, the reason for the error, the line number, and the column name? Just like you see in SQL*Loader logs. For example:
Record 2324: Rejected - Error on table AA_STAG_VR_01, column KS1.
ORA-01843: not a valid month
And the query shouldn't break; rather, it should log the error and move on, like SQL*Loader does.
Is it possible? I tried to look around the net, but I wasn't able to find anything, or maybe I simply don't know the magic words.
Thanks in advance :-)
EDIT:
OK, I was able to solve the problem (well, partly) using the following approach:
CREATE TABLE error_table (
  ora_err_number$ NUMBER,
  ora_err_mesg$   VARCHAR2(2000),
  ora_err_rowid$  ROWID,
  ora_err_optyp$  VARCHAR2(2),
  ora_err_tag$    VARCHAR2(2000)
);
INSERT INTO destination_table (column)
SELECT column FROM external_table
LOG ERRORS INTO error_table REJECT LIMIT UNLIMITED;
This gives me:
SELECT * FROM error_table;
----------------------------------------------------------------------------------------------------------------------------------------------------------
ORA_ERR_NUMBER$ | ORA_ERR_MESG$ | ORA_ERR_ROWID$ | ORA_ERR_OPTYP$ | ORA_ERR_TAG$ |
----------------------------------------------------------------------------------------------------------------------------------------------------------
12899 |ORA-12899: value too large for column "SYSTEM"."destination_table"."column"
So far, so good. However, I would like to know which record number (line number in external_table) has this error, because it is possible that the first 10 records went OK but the 11th record was bad.
Check out FORALL + SAVE EXCEPTIONS clause. It might help you.
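(The transcript below actually demonstrates DML error logging rather than FORALL.) For completeness, here is a minimal FORALL ... SAVE EXCEPTIONS sketch, reusing the err_test table created below; rejected rows are reported by their position in the bind array via SQL%BULK_EXCEPTIONS:
declare
  type t_vals is table of err_test.unique_column%type;
  l_vals t_vals;
  bulk_errors exception;
  pragma exception_init(bulk_errors, -24381); -- ORA-24381: error(s) in array DML
begin
  select mod(rownum, 2) bulk collect into l_vals
  from dual connect by rownum < 10;
  forall i in 1 .. l_vals.count save exceptions
    insert into err_test values (l_vals(i));
exception
  when bulk_errors then
    for i in 1 .. sql%bulk_exceptions.count loop
      dbms_output.put_line(
        'row '   || sql%bulk_exceptions(i).error_index ||
        ': ORA-' || lpad(sql%bulk_exceptions(i).error_code, 5, '0'));
    end loop;
end;
/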
15:57:02 #> conn hr/hr#vm_xe
Connected.
15:57:15 HR#vm_xe> create table err_test(unique_column number primary key);
Table created.
Elapsed: 00:00:01.51
15:57:46 HR#vm_xe> EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('err_test', 'errlog');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.46
15:59:22 HR#vm_xe> insert into err_test select mod(rownum, 2) from dual connect by rownum < 10
16:00:00 2 log errors into errlog ('test') reject limit unlimited;
2 rows created.
Elapsed: 00:00:00.87
16:00:27 HR#vm_xe> commit;
Commit complete.
Elapsed: 00:00:00.00
16:02:37 HR#vm_xe> col ora_err_mesg$ for a75
16:02:43 HR#vm_xe> col unique_column for a10
16:02:47 HR#vm_xe> select unique_column, ora_err_mesg$ from errlog;
UNIQUE_COL ORA_ERR_MESG$
---------- ---------------------------------------------------------------------------
1 ORA-00001: unique constraint (HR.SYS_C007056) violated
0 ORA-00001: unique constraint (HR.SYS_C007056) violated
1 ORA-00001: unique constraint (HR.SYS_C007056) violated
0 ORA-00001: unique constraint (HR.SYS_C007056) violated
1 ORA-00001: unique constraint (HR.SYS_C007056) violated
0 ORA-00001: unique constraint (HR.SYS_C007056) violated
1 ORA-00001: unique constraint (HR.SYS_C007056) violated
7 rows selected.
Elapsed: 00:00:00.03
Below is some syntax; you have REJECT LIMIT as in SQL*Loader, and you have log files, bad files, etc.
CREATE TABLE <table_name> (
  <column_definitions>)
ORGANIZATION EXTERNAL
(TYPE oracle_loader
 DEFAULT DIRECTORY <oracle_directory_object_name>
 ACCESS PARAMETERS (
   RECORDS DELIMITED BY newline
   BADFILE <file_name>
   DISCARDFILE <file_name>
   LOGFILE <file_name>
   [READSIZE <bytes>]
   [SKIP <number_of_rows>]
   FIELDS TERMINATED BY '<terminator>'
   REJECT ROWS WITH ALL NULL FIELDS
   MISSING FIELD VALUES ARE NULL
   (<column_name_list>))
 LOCATION ('<file_name>'))
[PARALLEL]
REJECT LIMIT <UNLIMITED | integer>;
See the Oracle documentation on external tables for examples and details.
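A concrete instance of the template, with hypothetical names (the directory object ext_dir and the file data.csv are assumptions and must already exist on the server):
CREATE TABLE ext_stage (
  id  NUMBER,
  ks1 DATE
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY newline
    BADFILE 'ext_stage.bad'
    LOGFILE 'ext_stage.log'
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (id,
     ks1 CHAR(10) date_format DATE mask "YYYY-MM-DD")
  )
  LOCATION ('data.csv')
)
REJECT LIMIT UNLIMITED;
Rows whose ks1 value does not parse as a date end up in the bad file and are noted in the log file, instead of breaking the query.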

Bulk Update from one table to another

So I tried a bulk update in order to copy values from the uemte_id column in the pp_terminal table to the uemte_id column (null at the start) in the mm_chip table. These two tables have no columns in common. This is what I used:
declare
  type ue_tab is table of pp_terminal.uemte_id%type;
  ue_name ue_tab;
  cursor c1 is select uemte_id from pp_terminal;
begin
  open c1;
  fetch c1 bulk collect into ue_name;
  close c1;
  -- bulk update (note: no WHERE clause)
  forall indx in ue_name.first..ue_name.last
    update mm_chip set uemte_id = ue_name(indx);
end;
/
And this is the error message I get:
Error report:
ORA-00001: unique constraint (DPOWNERA.IX_AK7_MM_CHIP) violated
ORA-06512: at line 13
00001. 00000 - "unique constraint (%s.%s) violated"
*Cause: An UPDATE or INSERT statement attempted to insert a duplicate key.
For Trusted Oracle configured in DBMS MAC mode, you may see
this message if a duplicate entry exists at a different level.
*Action: Either remove the unique restriction or do not insert the key.
Do you see any obvious mistakes?
What you're trying to do is:
select a row from the first table
update every row in the second table with that value
select another row from the first table
update every row in the second table with that value
and so forth until the loop finishes
I'm guessing that's not what you really want to do. It's failing because you have a unique constraint, so you're not allowed to have multiple rows in the second table with the same value.
Below is one way to update each row of one table based on the value of an arbitrary row in a second table, without reusing any rows from the second table. It would perform better if you could do it entirely in SQL, but I couldn't come up with a way to do that.
CREATE TABLE test4 AS
(SELECT LEVEL AS cola, CAST(NULL AS number) AS colb
FROM DUAL
CONNECT BY LEVEL <= 100);
CREATE TABLE test5 AS
(SELECT 100 + LEVEL AS colc
FROM DUAL
CONNECT BY LEVEL <= 99);
DECLARE
  CURSOR cur_test4 IS
    SELECT *
    FROM test4
    FOR UPDATE;
  CURSOR cur_test5 IS
    SELECT * FROM test5;
  r_test5 cur_test5%ROWTYPE;
BEGIN
  OPEN cur_test5;
  FOR r_test4 IN cur_test4 LOOP
    FETCH cur_test5 INTO r_test5;
    IF cur_test5%NOTFOUND THEN
      EXIT;
    END IF;
    UPDATE test4
    SET colb = r_test5.colc
    WHERE CURRENT OF cur_test4;
  END LOOP;
  CLOSE cur_test5;
END;
/
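For the record, a set-based alternative does seem possible by pairing the two tables on an arbitrary but stable row numbering (a sketch against the test4/test5 tables above; the ROW_NUMBER ordering is an assumption, and any deterministic order would do):
MERGE INTO test4 t
USING (
  SELECT a.cola, b.colc
  FROM (SELECT cola, ROW_NUMBER() OVER (ORDER BY cola) AS rn FROM test4) a
  JOIN (SELECT colc, ROW_NUMBER() OVER (ORDER BY colc) AS rn FROM test5) b
    ON a.rn = b.rn
) s
ON (t.cola = s.cola)
WHEN MATCHED THEN UPDATE SET t.colb = s.colc;
Because test5 has 99 rows and test4 has 100, the unmatched test4 row is simply left untouched, which mirrors the PL/SQL loop's exit condition.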
