Finding last number generated by a sequence before it got deleted - oracle

We found that a sequence "_SEQUENCE" is no longer present in one of our client's UAT instances. We don't know how or when it got deleted. It is a crucial sequence because the numbers it generates are used as unique column values in many tables across the DB. In other words, no two columns (of specific column types) in any two tables in the DB will ever hold the same number. On a few of these columns we also have unique indexes.
We can create the sequence again, but we don't know what its initial value should be because we don't know the last number the old sequence generated. If we set a wrong initial value and the new sequence happens to generate a number that is already present in one of those columns, we may end up with a unique key violation exception.
We could set the initial value to a very big number, but that is the last resort. Now:
Is it possible to find the last number the sequence "_SEQUENCE" generated before it got deleted?
Is it possible to find which process deleted the sequence "_SEQUENCE" and when?

The flashback operation is not available for a sequence, while it is available for tables. A similar mechanism can be built with the help of a DDL trigger, usually created in the SYS or SYSTEM schema. As an example, consider this procedure:
create or replace procedure pr_ddl_oper as
  v_oty varchar2(75) := ora_dict_obj_type;
  v_don varchar2(75) := ora_dict_obj_name;
  v_evt varchar2(75) := ora_sysevent;
  v_olu varchar2(75) := nvl(ora_login_user,'Unknown Schema');
  v_sql ora_name_list_t;
  v_stm clob;
  v_sct owa.vc_arr;
  n     pls_integer;
  n_max pls_integer := 10000; -- max number of text pieces concatenated into the
                              -- CLOB that holds the triggering statement
begin
  v_sct(1) := 'SESSIONID';
  v_sct(2) := 'IP_ADDRESS';
  v_sct(3) := 'TERMINAL';
  v_sct(4) := 'OS_USER';
  v_sct(5) := 'AUTHENTICATION_TYPE';
  v_sct(6) := 'CLIENT_INFO';
  v_sct(7) := 'MODULE';
  for i in 1..7
  loop
    v_sct(i) := sys_context('USERENV',v_sct(i));
  end loop;
  select decode(v_sct(1),0,null,v_sct(1)),
         decode(upper(v_sct(3)),'UNKNOWN',null,v_sct(3))
    into v_sct(1), v_sct(3)
    from dual;
  n := ora_sql_txt( v_sql );
  if n > n_max then
    n := n_max;
  end if;
  for i in 1..n
  loop
    v_stm := v_stm || v_sql(i);
  end loop;
  insert into log_ddl_oper(event_time,usr,evnt,stmt,sessionid,ip,terminal,os_user,
                           auth_type,object_type,object_name,client_info,module_info)
  values(sysdate,v_olu,v_evt,v_stm,v_sct(1),v_sct(2),v_sct(3),v_sct(4),v_sct(5),
         v_oty,v_don,v_sct(6),v_sct(7));
end;
which could be called by this trigger, to serve as a future reference:
--| Compiling this trigger, especially for Production Systems, should be handled with care |
create or replace trigger system.trg_admin_ddl
before ddl on database
begin
  pr_ddl_oper;
end;
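The procedure and trigger above assume a logging table already exists. A minimal sketch of such a table, with column names taken from the INSERT statement in pr_ddl_oper (the datatypes and sizes are assumptions; adjust them to your needs):
-- hypothetical logging table used by pr_ddl_oper
create table log_ddl_oper (
  event_time   date,
  usr          varchar2(75),
  evnt         varchar2(75),
  stmt         clob,
  sessionid    varchar2(75),
  ip           varchar2(75),
  terminal     varchar2(75),
  os_user      varchar2(75),
  auth_type    varchar2(75),
  object_type  varchar2(75),
  object_name  varchar2(75),
  client_info  varchar2(75),
  module_info  varchar2(75)
);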
By connecting as the SYS user you can query the last_number column for your dropped sequence:
select last_number
from dba_sequences
as of timestamp to_timestamp('2018-11-27 23:50:17', 'YYYY-MM-DD HH24:MI:SS') s
where s.sequence_name = 'MYSEQ';
where the timestamp value can be determined by
select l.event_time
--> returns to_timestamp('2018-11-27 23:50:17', 'YYYY-MM-DD HH24:MI:SS')
--> to use for the above SQL Select statement
from log_ddl_oper l
where l.object_name = 'MYSEQ'
and l.evnt = 'DROP'
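Once the last generated value is recovered, the sequence can be re-created to start safely above it. A minimal sketch (the numbers and attributes are placeholders: use the LAST_NUMBER returned by the flashback query plus a margin at least as large as the old CACHE setting):
-- 1153 is a placeholder: recovered LAST_NUMBER plus a safety margin
create sequence myseq start with 1153 increment by 1 cache 20;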

WHILE & FOR LOOP

I have a question about processing records with two different looping methods, i.e. WHILE and FOR loops. Please refer to the following code:
DECLARE
    TYPE rc_emp_type IS RECORD ( empno NUMBER, ename VARCHAR(30), sal NUMBER, comm NUMBER);
    TYPE rc_emp_tab IS TABLE OF rc_emp_type;
    l_emp_rec rc_emp_tab := rc_emp_tab();

    TYPE rc_emp_calc_type IS RECORD ( empno NUMBER,
                                      ename VARCHAR(30),
                                      sal NUMBER,
                                      comm NUMBER,
                                      new_sal NUMBER);
    TYPE rc_emp_calc_tab IS TABLE OF rc_emp_calc_type;
    l_emp_calc_rec  rc_emp_calc_tab := rc_emp_calc_tab();
    l_emp_fcalc_rec rc_emp_calc_tab := rc_emp_calc_tab();

    l_idx        NUMBER;
    l_start_time TIMESTAMP;
    l_end_time   TIMESTAMP;
    l_exe_time   TIMESTAMP;
BEGIN
    SELECT empno, ename, sal, comm
      BULK COLLECT INTO l_emp_rec
      FROM emp;

    l_idx := l_emp_rec.FIRST;
    WHILE l_idx IS NOT NULL
    LOOP
        l_emp_calc_rec.EXTEND;
        l_emp_calc_rec(l_emp_calc_rec.LAST).empno   := l_emp_rec(l_idx).empno;
        l_emp_calc_rec(l_emp_calc_rec.LAST).ename   := l_emp_rec(l_idx).ename;
        l_emp_calc_rec(l_emp_calc_rec.LAST).sal     := l_emp_rec(l_idx).sal;
        l_emp_calc_rec(l_emp_calc_rec.LAST).comm    := l_emp_rec(l_idx).comm;
        l_emp_calc_rec(l_emp_calc_rec.LAST).new_sal := NVL(l_emp_rec(l_idx).sal,0) + NVL(l_emp_rec(l_idx).comm,0);
        l_idx := l_emp_rec.NEXT(l_idx);
    END LOOP;

    FOR l_idx IN l_emp_rec.FIRST .. l_emp_rec.LAST
    LOOP
        l_emp_fcalc_rec.EXTEND;
        l_emp_fcalc_rec(l_emp_fcalc_rec.LAST).empno   := l_emp_rec(l_idx).empno;
        l_emp_fcalc_rec(l_emp_fcalc_rec.LAST).ename   := l_emp_rec(l_idx).ename;
        l_emp_fcalc_rec(l_emp_fcalc_rec.LAST).sal     := l_emp_rec(l_idx).sal;
        l_emp_fcalc_rec(l_emp_fcalc_rec.LAST).comm    := l_emp_rec(l_idx).comm;
        l_emp_fcalc_rec(l_emp_fcalc_rec.LAST).new_sal := NVL(l_emp_rec(l_idx).sal,0) + NVL(l_emp_rec(l_idx).comm,0);
    END LOOP;
END;
Out of these two approaches above, which is the more efficient way of looping?
If you know your collection will be densely filled, as is the case with a collection populated by BULK COLLECT, I suggest you use a numeric FOR loop. It assumes the collection is dense and is therefore the most appropriate choice in that context.
Whenever you are not 100% certain that your collection is densely-filled, you should use a WHILE loop and the FIRST-NEXT or LAST-PRIOR methods to iterate through the collection.
You might argue that you may as well just use the WHILE loop all the time. Performance will be fine and memory consumption is no different... BUT: you might "hide" an error this way. If the collection is supposed to be dense but it is not, you will never know.
Finally, there is one way in which the WHILE loop could be a better performer than a FOR loop: if your collection is very sparse (e.g., elements populated only at index values -1M, 0, 1M, 2M, 3M, etc.), the FOR loop will raise lots of NO_DATA_FOUND exceptions. Handling and continuing after all those exceptions will make loop execution very slow.
Out of these two approaches above, which is the more efficient way of looping?
When dealing with collections, a FOR loop sometimes throws an error when the collection is sparse; in that case it is beneficial to use a WHILE loop. Otherwise, both looping mechanisms are equal in performance.
Sparse: a collection is sparse if there is at least one index value between the lowest and highest defined index values that is not defined. For example, a sparse collection has an element assigned to index value 1 and another to index value 10, but nothing in between. The opposite of a sparse collection is a dense one.
Use a numeric FOR loop when
Your collection is densely filled (every index value between the lowest and the highest is defined)
You want to scan the entire collection, not terminating your scan if some condition is met
Conversely, use a WHILE loop when
Your collection may be sparse
You might terminate the loop before you have iterated through all the elements in the collection
As @XING points out, the difference is not in how efficient they are, but in what happens with sparse collections. Your example does not face this issue, as both collections are built with BULK COLLECT, so there are no gaps in the index values. But this is NOT always the case. The following demo shows the difference between them.
declare
cursor c_numbers is
select level+23 num -- 23 has no particular significance
from dual
connect by level <= 5; -- nor does 5
type base_set is table of c_numbers%rowtype;
while_set base_set;
for_set base_set;
while_index integer; -- need to define while loop index
begin
-- populate both while and for arrays.
open c_numbers;
fetch c_numbers bulk collect into while_set;
close c_numbers;
open c_numbers;
fetch c_numbers bulk collect into for_set;
close c_numbers;
-- Make sparse
while_set.delete(3);
for_set.delete(3);
-- loop through with while
while_index := while_set.first;
while while_index is not null
loop
begin
dbms_output.put_line('While_Set(' ||
while_index ||
') = ' ||
while_set(while_index).num
);
while_index := while_set.next(while_index);
exception
when others then
dbms_output.put_line('Error in While_Set(' ||
while_index ||
') Message=' ||
sqlerrm
);
end;
end loop;
-- loop through with for
for for_index in for_set.first .. for_set.last
loop
begin
dbms_output.put_line('For_Set(' ||
for_index ||
') = ' ||
for_set(for_index).num
);
exception
when others then
dbms_output.put_line('Error in For_Set(' ||
for_index ||
') Message=' ||
sqlerrm
);
end;
end loop;
end;
Also try a FOR loop with a collection defined as:
type state_populations_t is table of number index by varchar2(20);
state_populations state_populations_t;
And yes, that declaration is in production code and has run for years.
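A numeric FOR loop cannot iterate a collection like that at all, because the index values are strings rather than consecutive integers; the FIRST/NEXT pattern still works. A minimal sketch (the keys and population figures are made-up values for illustration):
declare
    type state_populations_t is table of number index by varchar2(20);
    state_populations state_populations_t;
    l_state varchar2(20);
begin
    state_populations('Alaska')  := 731545;   -- made-up sample data
    state_populations('Texas')   := 29145505;
    state_populations('Vermont') := 643077;
    l_state := state_populations.first;       -- keys come back in character order
    while l_state is not null
    loop
        dbms_output.put_line(l_state || ' = ' || state_populations(l_state));
        l_state := state_populations.next(l_state);
    end loop;
end;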

inserting of data not happening in oracle

I am new to PL/SQL. I have a table where I need to insert some dummy data, so I thought I would use a PL/SQL block with a FOR loop to insert the data automatically. The PL/SQL block runs successfully, but the data is stored as empty. The block I tried is:
declare
    v_number1 number;
    v_number2 number;
    v_number3 number;
begin
    FOR Lcntr IN 2..17
    LOOP
        v_number1 := v_number1 + 1;
        v_number2 := v_number2 + 2;
        v_number3 := v_number3 + 3;
        INSERT INTO stu.result (res_id, stu_id, eng, maths, science)
        VALUES (stu.seq_no.NEXTVAL, Lcntr, v_number1, v_number2, v_number3);
    END LOOP;
end;
But my table is loaded as follows (please ignore the first two rows of data, I inserted them manually):
The data for eng, maths and science is not being inserted. Why is this happening?
That's because your variables are NULL. NULL + 1 = NULL as well.
If you modify the declarations to
v_number1 number := 0;
v_number2 number := 0;
v_number3 number := 0;
something might happen.
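Putting it together, a minimal corrected version of the block (assuming the stu.result table and stu.seq_no sequence from the question exist):
declare
    v_number1 number := 0;  -- initialize, otherwise NULL + 1 stays NULL
    v_number2 number := 0;
    v_number3 number := 0;
begin
    for lcntr in 2..17
    loop
        v_number1 := v_number1 + 1;
        v_number2 := v_number2 + 2;
        v_number3 := v_number3 + 3;
        insert into stu.result (res_id, stu_id, eng, maths, science)
        values (stu.seq_no.nextval, lcntr, v_number1, v_number2, v_number3);
    end loop;
    commit;
end;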

Best practices to purge millions of data in oracle

I need to purge about 1 billion rows, records more than 10 years old, from a table tblmail. For that I have created the procedure below.
I am doing it in batches.
CREATE OR REPLACE PROCEDURE PURGE_Data AS
    batch_size             INTEGER := 1000;
    pvc_procedure_name     CONSTANT VARCHAR2(50) := 'Purge_data';
    pvc_info_message_num   CONSTANT NUMBER := 1;
    pvc_error_message_type CONSTANT VARCHAR2(5) := 'ERROR';
    v_message schema_mc.db_msg_log.message%TYPE;
    v_msg_num schema_mc.db_msg_log.msg_num%TYPE;
    /*
      Purpose: Provide stored procedures to be used to purge unwanted archives.
    */
BEGIN
    DELETE FROM tblmail
     WHERE createdate_dts < (SYSDATE - INTERVAL '10' YEAR)
       AND ROWNUM <= batch_size;
    COMMIT;
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        v_msg_num := SQLCODE;
        v_message := 'Error deleting from tblmail table';
        INSERT INTO error_log
            (date, num, type, source, mail)
        VALUES
            (systimestamp, v_msg_num, pvc_error_message_type, pvc_procedure_name, v_message);
        COMMIT;
END;
Do I need to use bulk collect and delete? What is the best way to do this?
As always in computing it depends. Provided that you have an index on createdate_dts your procedure should work, but how do you know when to stop calling it? I tend to use a loop:
loop
    delete /*+ first_rows */ from tblmail
     where createdate_dts < (SYSDATE - INTERVAL '10' YEAR)
       and ROWNUM <= batch_size;
    v_rows := SQL%ROWCOUNT;
    commit;
    exit when v_rows = 0;
end loop;
You could also return the number of deleted records if you want to keep the loop outside of the procedure. Without an index on createdate_dts it may be cheaper to collect the primary keys of the rows to delete in one pass first and then loop over them, deleting batch_size records per commit with bulk collect or something along those lines (sketched below). However, when possible it is always nice to use a simple solution! You may want to experiment a bit in order to find the best batch size.
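For that index-less case, a minimal sketch of the collect-the-keys-first approach (tblmail is from the question, but the primary key column mail_id is an assumption, not taken from it):
declare
    cursor c_old is
        select mail_id                    -- assumed primary key column
          from tblmail
         where createdate_dts < (sysdate - interval '10' year);
    type t_ids is table of tblmail.mail_id%type;
    v_ids t_ids;
begin
    open c_old;
    loop
        fetch c_old bulk collect into v_ids limit 1000;  -- batch size
        exit when v_ids.count = 0;
        forall i in 1 .. v_ids.count
            delete from tblmail where mail_id = v_ids(i);
        commit;                                          -- commit per batch
    end loop;
    close c_old;
end;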

Performance issues with BEFORE INSERT trigger: it takes a lot of time compared to calling the procedure directly after the insert

I have a BEFORE INSERT trigger calling my procedure to process the XML in the XMLTYPE fields of STAGE_TBL and insert the data into PROCESSED_DATA_TBL.
I have to go with a BEFORE INSERT trigger (I could use a compound trigger as well, but I haven't tried it yet) in order to update the status on the STAGE_TBL row based on the outcome of processing the XML.
The issue I am having is that my XML can be huge; it can have about 100 - 2000 rp_sendRow chunks, and when it is huge the trigger takes a very long time. I tried with 100 rp_sendRow elements and it takes about 4 minutes through the trigger.
But if I disable the trigger, insert into STAGE_TBL, and then call XML_PROCESS for the newly inserted record using the ID, it completes (processes the XML and inserts into PROCESSED_DATA_TBL) in less than a second from SQL Developer.
I cannot insert the huge XML with a regular SQL INSERT from SQL Developer as there is a 4000-character limit, and since the database is not local I cannot use the XMLType(bfilename('XMLDIR', 'MY.xml')) option either, so I am using JDBC code to insert the huge XML.
I have called XML_PROCESS directly from JDBC for the same XML and it took less than a second to process and insert into PROCESSED_DATA_TBL.
Please let me know why the trigger is taking so much time.
I am using Oracle 11g, SQL Developer 4.1.0.19
--Trigger Code
create or replace TRIGGER STAGE_TRIGGER
BEFORE INSERT ON STAGE_TBL
FOR EACH ROW
DECLARE
    ROW_COUNT          NUMBER;
    PROCESS_STATUS     VARCHAR2(1);
    STATUS_DESCRIPTION VARCHAR2(300);
BEGIN
    XML_PROCESS(:NEW.ID, :NEW.XML_DOCUMENT, PROCESS_STATUS, STATUS_DESCRIPTION, ROW_COUNT);
    IF (ROW_COUNT > 0) THEN
        :NEW.STATUS             := PROCESS_STATUS;
        :NEW.STATUS_DATE        := SYSDATE;
        :NEW.STATUS_DESCRIPTION := STATUS_DESCRIPTION;
        :NEW.SHRED_TS           := SYSTIMESTAMP;
    ELSE -- This is to handle the 0-records-inserted scenario & exception scenarios
        :NEW.STATUS             := STATUS.ERROR;
        :NEW.STATUS_DATE        := SYSDATE;
        :NEW.STATUS_DESCRIPTION := STATUS_DESCRIPTION;
    END IF;
EXCEPTION
    WHEN OTHERS THEN
        :NEW.STATUS             := PROCESS_STATUS;
        :NEW.STATUS_DESCRIPTION := STATUS_DESCRIPTION;
        NULL;
END STAGE_TRIGGER;
--Stored Procedure
create or replace PROCEDURE XML_PROCESS (ID IN RAW, xData IN XMLTYPE, PROCESS_STATUS OUT VARCHAR2, STATUS_DESCRIPTION OUT VARCHAR2, ROW_COUNT OUT NUMBER) AS
BEGIN
INSERT ALL INTO PROCESSED_DATA_TBL
(ID,
STORE,
SALES_NBR,
UNIT_COST,
ST_FLAG,
ST_DATE,
ST,
START_QTY,
START_VALUE,
START_ON_ORDER,
HAND,
ORDER,
COMMITED,
SALES,
RECEIVE,
VALUE,
COST,
ID_1,
ID_2,
ID_3,
UNIT_PRICE,
EFFECTIVE_DATE,
STATUS,
STATUS_DATE,
STATUS_REASON)
VALUES (ID
,storenbr
,SalesNo
,UnitCost
,StWac
,StDt
,St
,StartQty
,StartValue
,StartOnOrder
,Hand
,Order
,Commit
,Sales
,Rec
,Value
,Id1
,Id2
,Id3
,UnitPrice
,to_Date(EffectiveDate||' '||EffectiveTime, 'YYYY-MM-DD HH24:MI:SS')
,'N'
,SYSDATE
,'XML PROCESS INSERT')
SELECT E.* FROM XMLTABLE('rp_send/rp_sendRow' PASSING xData COLUMNS
store VARCHAR(20) PATH 'store'
,SalesNo VARCHAR(20) PATH 'sales'
,UnitCost NUMBER PATH 'cost'
,StWac VARCHAR(20) PATH 'flag'
,StDt DATE PATH 'st-dt'
,St NUMBER PATH 'st'
,StartQty NUMBER PATH 'qty'
,StartValue NUMBER PATH 'value'
,StartOnOrder NUMBER PATH 'order'
,Hand NUMBER PATH 'hand'
,Order NUMBER PATH 'order'
,Commit NUMBER PATH 'commit'
,Sales NUMBER PATH 'sales'
,Rec NUMBER PATH 'rec'
,Value NUMBER PATH 'val'
,Id1 VARCHAR(30) PATH 'id-1'
,Id2 VARCHAR(30) PATH 'id-2'
,Id3 VARCHAR(30) PATH 'id-3'
,UnitPrice NUMBER PATH 'unit-pr'
,EffectiveDate VARCHAR(30) PATH 'eff-dt'
,EffectiveTime VARCHAR(30) PATH 'eff-tm'
) E;
ROW_COUNT := SQL%ROWCOUNT;
PROCESS_STATUS := STATUS.PROCESSED;
STATUS_DESCRIPTION := ROW_COUNT || ' Rows Successfully Inserted ';
EXCEPTION
WHEN DUP_VAL_ON_INDEX THEN
BEGIN
ROW_COUNT := 0;
PROCESS_STATUS := STATUS.ERROR;
STATUS_DESCRIPTION := SUBSTR(SQLERRM, 1, 250);
END;
WHEN OTHERS THEN
BEGIN
ROW_COUNT := 0;
PROCESS_STATUS := STATUS.ERROR;
STATUS_DESCRIPTION := SUBSTR(SQLERRM, 1, 250);
END;
END XML_PROCESS;
--Standalone Procedure calling XML_PROCESS
SET DEFINE OFF
DECLARE
ROW_COUNT NUMBER;
PROCESS_STATUS VARCHAR2(1);
STATUS_DESCRIPTION VARCHAR2(300);
V_ID NUMBER;
V_XML XMLTYPE;
BEGIN
SELECT ID, XML_DOCUMENT INTO V_ID, V_XML FROM STAGE_TBL WHERE ID = '7954';
XML_PROCESS(V_ID, V_XML, PROCESS_STATUS, STATUS_DESCRIPTION, ROW_COUNT);
update STAGE_TBL SET STATUS = PROCESS_STATUS,
STATUS_DATE = SYSDATE,
STATUS_DESCRIPTION = STATUS_DESCRIPTION
WHERE ID = V_ID;
END;
XML
<?xml version = "1.0" encoding = "UTF-8"?>
<rp_send xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<rp_sendRow>
<store>0123</store>
<sales>022399190</sales>
<cost>0.01</cost>
<flag>true</flag>
<st-dt>2013-04-19</st-dt>
<st>146.51</st>
<qty>13.0</qty>
<value>0.0</value>
<order>0.0</order>
<hand>0.0</hand>
<order>0.0</order>
<commit>0.0</commit>
<sales>0.0</sales>
<rec>0.0</rec>
<val>0.0</val>
<id-1/>
<id-2/>
<id-3/>
<unit-pr>13.0</unit-pr>
<eff-dt>2015-06-16</eff-dt>
<eff-tm>09:12:21</eff-tm>
</rp_sendRow>
</rp_send>
There are too many unknown variables to determine the problem, but with this information I see four (edited to include more answers) possible answers:
1) If you are inserting many rows in only one statement (INSERT ... SELECT) the trigger will slow performance.
But your standalone procedure call example operates on only one row (ID = '7954'), so I assume the problem persists even with a single-row insert. In this case 1) is not the problem.
2) You have some kind of index on STAGE_TBL(XML_DOCUMENT). When the BEFORE INSERT trigger is called the XMLType is not indexed and your trigger calls the procedure with a non-indexed version of XML_DOCUMENT. But in your standalone procedure example, XML_DOCUMENT is inserted and indexed, so the procedure uses the index.
Complex indexes on complex objects can be used by the Oracle optimizer not only when selecting data from a table, but also when processing the data itself. This means: if you have an index on particular data, it can be used by a procedure that uses this data. And Oracle's XMLType is a complex object that can be indexed in many, many ways (see: http://docs.oracle.com/cd/B28359_01/appdev.111/b28369/xdb_indexing.htm#CHDCGACG).
I think the XMLTABLE function is being optimized when XML_DOCUMENT has actually been inserted into STAGE_TBL.
You can test it by calling your standalone procedure with an XML_DOCUMENT that is not extracted from STAGE_TBL (or from any table that could index the document), as sketched below. In that case the trigger and standalone performance should be similar.
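A rough sketch of such a test: the XMLType is built from a string literal so it never touches a table, the document is abbreviated (paste a full rp_sendRow in practice), and HEXTORAW('7954') simply reuses the ID from the question:
DECLARE
  ROW_COUNT          NUMBER;
  PROCESS_STATUS     VARCHAR2(1);
  STATUS_DESCRIPTION VARCHAR2(300);
  -- literal document: never stored in (or indexed by) any table
  V_XML XMLTYPE := XMLTYPE(
      '<rp_send><rp_sendRow>' ||
      '<store>0123</store><sales>022399190</sales><cost>0.01</cost>' ||
      -- ... paste the remaining rp_sendRow elements from the test document here ...
      '<eff-dt>2015-06-16</eff-dt><eff-tm>09:12:21</eff-tm>' ||
      '</rp_sendRow></rp_send>');
BEGIN
  XML_PROCESS(HEXTORAW('7954'), V_XML, PROCESS_STATUS, STATUS_DESCRIPTION, ROW_COUNT);
  DBMS_OUTPUT.PUT_LINE(PROCESS_STATUS || ' - ' || STATUS_DESCRIPTION);
END;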
EDITED: You comment that you have tested the second answer and the performance problem persists. So I include a third option:
3) You have included an XML validation check constraint on STAGE_TBL, and this validation is the source of the performance difference. The standalone example does not validate the XML document, but the insert validates it.
You can check if this is what is happening by disabling the trigger. If the insert without the trigger is still slow then the problem is not the trigger, but it is the XML validation.
EDITED: You comment that you have tested the third answer and the performance problem persists. So I include a fourth option:
4) In https://community.oracle.com/thread/2526907 a performance problem with XMLTable when working with big XML documents is described. They comment that the TABLE(XMLSequence()) approach is better in these cases, because XMLTable creates big intermediate results and TABLE(XMLSequence()) does not.
So in your INSERT statement change your SELECT from:
SELECT E.* FROM XMLTABLE('rp_send/rp_sendRow' PASSING xData COLUMNS
store VARCHAR(20) PATH 'store'
,SalesNo VARCHAR(20) PATH 'sales'
,UnitCost NUMBER PATH 'cost'
,StWac VARCHAR(20) PATH 'flag'
,StDt DATE PATH 'st-dt'
...
...
,EffectiveTime VARCHAR(30) PATH 'eff-tm'
) E;
To:
SELECT value(e).extract('//store/text()').getStringVal() store,
value(e).extract('//sales/text()').getStringVal() SalesNo,
value(e).extract('//cost/text()').getNumberVal() UnitCost,
value(e).extract('//flag/text()').getStringVal() StWac,
to_date(value(e).extract('//st-dt/text()').getStringVal(),'YYYY-MM-DD') StDt,
...
...
value(e).extract('//eff-tm/text()').getStringVal() EffectiveTime
FROM TABLE(XMLSEQUENCE(EXTRACT(xData, '/rp_send/rp_sendRow'))) e;

Can I see the DML inside an Oracle trigger?

Is it possible to see the DML (SQL Statement) that is being run that caused a trigger to be executed?
For example, inside an INSERT trigger I would like to get this:
"insert into myTable (name) values ('Fred')"
I read about ora_sql_txt(sql_text) in articles such as this but couldn't get it working - not sure if that is even leading me down the right path?
We are using Oracle 10.
Thank you in advance.
=========================
[EDITED] MORE DETAIL: We have the need to replicate an existing database (DB1) into a classified database (DB2) that is not accessible via the network. I need to keep these databases in sync. This is a one-way sync from (DB1) to (DB2), since (DB2) will contain additional tables and data that is not contained in the (DB1) system.
I have to determine a way to sync these databases without bringing them down (say, for a backup and restore) because they need to stay live. So I thought that if I can store the actual DML being run (when data changes), I could "play back" the DML on the new database to update it, just as if someone were hand-entering it back in.
I can't bring over all the data because of the sheer size of it, and I can't just copy over the changed records because of FK constraints and the order in which I insert/update records. I figured that if I could "play-back" a log of what happened, using the exact SQL that changed the master, I could keep the databases in sync.
My current plan of attack was to keep a log of all records that were changed, inserted, and deleted, and when I want to sync, the system generates DML to insert/update/delete those records. Then I just take the .SQL file to the classified system and run the script. The problem I'm running into is FKs. (Because when I generate the DML I only know what the current state of the data is, not its path to get there, so ordering of statements is an issue.) I guess I could disable all FKs, do the merge, then re-enable all FKs...
So - does my approach of storing the actual DML as-it-happens suck pondwater, or is there a better solution???
"does my approach of storing the actual DML as-it-happens suck pondwater?" Yes..
Strict ordering of the DML on your DB1 does not really exist. Multiple processes, muiltiple cores, things essentially happening at the essentially the same time.
And the DML, even when it happens sequentially doesn't act like it. Say the following two update statements run in seperate processes with seperate transactions, where the update in transaction 2 starts before transaction 1 commits:
update table_a set col_a = 10 where col_b = 'A' -- transaction 1
update table_a set col_c = 'Error' where col_a = 10 -- transaction 2
Since the changes made in the first transaction are not visible to the second transaction, the rows changed by the second transaction will not include those of the first. But if you manage to capture the DML and replay it sequentially, transaction 1's changes will be visible, so transaction 2's changes will be different. (See pages 40 and 41 of Tom Kyte's Expert Oracle Database Architecture, Second Edition.)
Hopefully you are using bind variables, so the DML by itself wouldn't be meaningful: update table_a set col_a = :col_a where id = :id. Now what? OK, so you want the DML with its variable bindings.
Do you use sequences? If so, the next_val will not stay in sync between DB1 and DB2. (For example, instance failures can cause lost values; are both systems going to fail at the same time?) And if you are dealing with RAC, where the next_val varies depending on the node, forget it.
I would start by investigating Oracle's replication.
I had a situation where I needed to move metadata/configuration changes (stored in a handful of tables) from a development environment to a production environment once tested. Something like GoldenGate is the product to use for this, but it can be costly and complicated to set up and administer.
The following procedure generates a trigger and attaches it to a table whose DML needs to be saved. The trigger re-creates the DML and, in the following case, saves it to an audit table; it's up to you what you do with it. You can use the statements saved to the audit table to replay changes from a given point in time (cut and paste, or develop a procedure to apply them to the target).
Hope you find this useful.
procedure gen_trigger( p_tname in varchar2 )
is
l_theCursor integer default dbms_sql.open_cursor;
l_query varchar2(1000) default 'select * from ' || p_tname;
l_colCnt number := 0;
l_descTbl dbms_sql.desc_tab;
trg varchar(32767) := null;
expr varchar(32767) := null;
cmd varchar(32767) := null;
begin
dbms_sql.parse( l_theCursor, l_query, dbms_sql.native );
dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );
trg := q'#
create or replace trigger <%TABLE_NAME%>_audit
after insert or update or delete on <%TABLE_NAME%> for each row
declare
qs varchar2(20) := q'[q'^]';
qe varchar2(20) := q'[^']';
command clob;
nlsd varchar2(100);
begin
select value into nlsd from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
execute immediate 'alter session set nls_date_format = ''YYYY/MM/DD hh24:mi:ss'' ';
if inserting then
command := <%INSERT_COMMAND%>;
end if;
if updating then
command := <%UPDATE_COMMAND%>;
end if;
if deleting then
command := <%DELETE_COMMAND%>;
end if;
insert into x_audit values (systimestamp, command);
execute immediate q'+alter session set nls_date_format = '+'|| nlsd || q'+'+';
end;
#';
-- Create the insert command
cmd := q'#'insert into <%TABLE_NAME%> (<%INSERT_COLS%>) values ('||<%INSERT_VAL%>||')'#';
-- columns clause
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || ',';
end if;
expr := expr || l_descTbl(i).col_name;
end loop;
cmd := replace(cmd,'<%INSERT_COLS%>',expr);
-- values clause
expr := null;
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || q'#||','||#';
end if;
expr := expr || 'qs||:new.' || l_descTbl(i).col_name || '||qe';
end loop;
cmd := replace(cmd,'<%INSERT_VAL%>',expr);
trg := replace(trg,'<%INSERT_COMMAND%>',cmd);
-- create the update command
-- set clause
expr := null;
cmd := q'#'update <%TABLE_NAME%> set '||<%UPDATE_COLS%>||' where '||<%WHERE_CLAUSE%>#';
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || q'#||','||#';
end if;
expr := expr || q'#'#' || l_descTbl(i).col_name || q'# = '||#'|| 'qs||:new.'||l_descTbl(i).col_name || '||qe';
end loop;
null;
cmd := replace(cmd,'<%UPDATE_COLS%>',expr);
trg := replace(trg,'<%UPDATE_COMMAND%>',cmd);
-- create the delete command
expr := null;
cmd := q'#'delete <%TABLE_NAME%> where '||<%WHERE_CLAUSE%>#';
trg := replace(trg,'<%DELETE_COMMAND%>',cmd);
-- where clause using primary key columns (used by update and delete)
expr := null;
for pk in (SELECT column_name FROM all_cons_columns WHERE constraint_name = (
SELECT constraint_name FROM user_constraints
WHERE UPPER(table_name) = UPPER(p_tname) AND CONSTRAINT_TYPE = 'P'
)) loop
if expr is not null then
expr := expr || q'#|| ' and '||#';
end if;
expr := expr || q'#'#' || pk.column_name || q'# = '||#'|| 'qs||:old.'|| pk.column_name || '||qe';
end loop;
if expr is null then -- must have a primary key
raise_application_error(-20000,'The table must have a primary key defined');
end if;
trg := replace(trg,'<%WHERE_CLAUSE%>',expr);
trg := replace(trg,'<%TABLE_NAME%>',p_tname);
execute immediate trg;
null;
exception
when others then
execute immediate 'alter session set nls_date_format=''YYYY/MM/DD'' ';
raise;
end;
/* Example
create table t1 (
col1 varchar2(100),
col2 number,
col3 date,
constraint pk_t1 primary key (col1)
)
/
BEGIN
GEN_TRIGGER('T1');
END;
/
-- Trigger generated ....
create or replace trigger t1_audit after
insert or
update or
delete on t1 for each row
declare
qs varchar2(20) := q'[q'^]';
qe varchar2(20) := q'[^']';
command clob;
nlsd varchar2(100);
begin
select value into nlsd from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
execute immediate 'alter session set nls_date_format = ''YYYY/MM/DD hh24:mi:ss'' ';
if inserting then
command := 'insert into T1 (COL1,COL2,COL3) values ('||qs||:new.col1||qe||','||qs||:new.col2||qe||','||qs||:new.col3||qe||')';
end if;
if updating then
command := 'update T1 set '||'COL1 = '||qs||:new.col1||qe||','||'COL2 = '||qs||:new.col2||qe||','||'COL3 = '||qs||:new.col3||qe||' where '||'COL1 = '||qs||:old.col1||qe;
end if;
if deleting then
command := 'delete T1 where '||'COL1 = '||qs||:old.col1||qe;
end if;
insert into x_audit values
(systimestamp, command
);
execute immediate q'+alter session set nls_date_format = '+'|| nlsd || q'+'+';
end;
*/
That function only works for 'event' triggers as discussed here.
You should look into Fine-Grained Auditing as a mechanism for this. Details here
When the trigger code runs, don't you already know the DML that caused it to run?
CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
...
In this case it must have been an insert or an update statement on the emp_tab table.
To find out whether it was an update or an insert:
if inserting then
...
elsif updating then
...
end if;
The exact column values are available in the :old and :new pseudo-columns.
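Putting those pieces together, a minimal self-contained sketch of a row trigger that records which statement type fired it and the changed values (the log_msg table, its columns, and the sal column on Emp_tab are assumptions for illustration):
CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE INSERT OR UPDATE OR DELETE ON Emp_tab
FOR EACH ROW
DECLARE
  v_action VARCHAR2(10);
BEGIN
  IF inserting THEN
    v_action := 'INSERT';
  ELSIF updating THEN
    v_action := 'UPDATE';
  ELSIF deleting THEN
    v_action := 'DELETE';
  END IF;
  -- :old is empty on insert, :new is empty on delete
  INSERT INTO log_msg (logged_at, action, old_sal, new_sal)
  VALUES (SYSTIMESTAMP, v_action, :old.sal, :new.sal);
END;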
