I have 10,000 rows to insert into a table.
For an insert with FORALL in Oracle...
FORALL x IN TABLE_NAME.First .. TABLE_NAME.Last
INSERT
INTO TABLE_NAME VALUES
(
TABLE_NAME(x).VAL1,
TABLE_NAME(x).VAL2,
TABLE_NAME(x).VAL3,
TABLE_NAME(x).VAL4,
TABLE_NAME(x).VAL5
);
How can I recover the data from the rows that don't get inserted into the table due to constraint violations, so that I can insert those items into a non-typed table of rejected items?
You can use the save exceptions clause of the forall statement to collect the error(s), and then use the sql%bulk_exceptions implicit cursor attribute to see what actually happened. There's an example in the documentation, but in your case you can do (using a made-up table and data):
create table your_table (val1 number primary key, val2 number, val3 number, val4 number, val5 number);
declare
type l_table_type is table of your_table%rowtype;
l_table l_table_type := l_table_type();
dml_errors exception;
pragma exception_init(dml_errors, -24381);
begin
l_table.extend;
l_table(l_table.count).val1 := 1;
l_table(l_table.count).val2 := 1.2;
l_table(l_table.count).val3 := 1.3;
l_table(l_table.count).val4 := 1.4;
l_table(l_table.count).val5 := 1.5;
l_table.extend;
l_table(l_table.count).val1 := 2;
l_table(l_table.count).val2 := 2.2;
l_table(l_table.count).val3 := 2.3;
l_table(l_table.count).val4 := 2.4;
l_table(l_table.count).val5 := 2.5;
l_table.extend;
l_table(l_table.count).val1 := 1;
l_table(l_table.count).val2 := 3.2;
l_table(l_table.count).val3 := 3.3;
l_table(l_table.count).val4 := 3.4;
l_table(l_table.count).val5 := 3.5;
forall x in l_table.first .. l_table.last save exceptions
insert
into your_table values
(
l_table(x).val1,
l_table(x).val2,
l_table(x).val3,
l_table(x).val4,
l_table(x).val5
);
exception
when dml_errors then
for i in 1..sql%bulk_exceptions.count loop
dbms_output.put_line('Index ' || sql%bulk_exceptions(i).error_index
|| ' error ' || -sql%bulk_exceptions(i).error_code);
dbms_output.put_line(' val1: ' || l_table(sql%bulk_exceptions(i).error_index).val1);
dbms_output.put_line(' val2: ' || l_table(sql%bulk_exceptions(i).error_index).val2);
dbms_output.put_line(' val3: ' || l_table(sql%bulk_exceptions(i).error_index).val3);
dbms_output.put_line(' val4: ' || l_table(sql%bulk_exceptions(i).error_index).val4);
dbms_output.put_line(' val5: ' || l_table(sql%bulk_exceptions(i).error_index).val5);
end loop;
end;
/
That produces the output:
PL/SQL procedure successfully completed.
Index 3 error -1
val1: 1
val2: 3.2
val3: 3.3
val4: 3.4
val5: 3.5
The first collection element with val1 set to one was inserted successfully; the second one got the unique constraint exception and was not - but it was put into the bulk exception mechanism instead of causing the entire statement to fail.
You can then decide whether to raise an exception (or re-raise), or immediately roll back (possibly to a savepoint); or commit the inserts that did not error.
You can also insert the same values into your table of rejected items, but you'd need to be a bit careful if you're rolling back other changes (presumably you wouldn't be in that scenario though).
You can't directly use another forall to do that - referring to l_table(sql%bulk_exceptions(i).error_index).val1 inside the values() clause throws ORA-00911 because of the % character - so you'd either have to do individual inserts inside the for loop, or copy the values to another collection and bulk-insert from that. Unless you are expecting a lot of rejections individual inserts might be good enough.
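For the rejected-items table specifically, here is a minimal sketch of the individual-insert option (rejected_items is a made-up table with the same five columns but no constraints); the errored row is copied into a local record first so the insert itself never references sql%bulk_exceptions:
exception
  when dml_errors then
    for i in 1..sql%bulk_exceptions.count loop
      declare
        l_bad your_table%rowtype;
      begin
        -- copy the rejected element out of the collection first
        l_bad := l_table(sql%bulk_exceptions(i).error_index);
        insert into rejected_items (val1, val2, val3, val4, val5)
        values (l_bad.val1, l_bad.val2, l_bad.val3, l_bad.val4, l_bad.val5);
      end;
    end loop;
end;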
I hope someone can explain to me why I'm seeing ORA-01555. I have a function and a procedure to perform a cleanup in a huge table:
-- Small cleanup in separate transaction.
FUNCTION clean_single(ts_until IN TIMESTAMP, datapoint_id IN NUMBER) RETURN NUMBER IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
DELETE FROM VALUES WHERE DATAPOINT_ID = datapoint_id AND TS < ts_until;
COMMIT;
RETURN sql%rowcount;
END clean_single;
PROCEDURE prc_clean IS
count_all_deleted_vals NUMBER(10) := 0;
BEGIN
BEGIN
FOR dps IN (
SELECT x AS dpid, y AS tsUntil FROM Z WHERE some conditions)
LOOP
count_all_deleted_vals := count_all_deleted_vals + clean_single(dps.tsUntil, dps.dpid);
END LOOP;
DBMS_OUTPUT.PUT_LINE('Removed ' || count_all_deleted_vals || ' values');
END;
END prc_clean;
The idea is to run prc_clean() from a job and after the relevant datapoint ids are selected, the deletion per datapoint-id is done in a single transaction to avoid having one huge transaction.
But when I run this it runs for a while and then fails with ORA-01555.
I do not fully understand why this is happening. Why doesn't the PRAGMA AUTONOMOUS_TRANSACTION in the function prevent this?
What can I do to prevent it?
As far as I can tell, a common cause of ORA-01555 is committing within a loop - and that's exactly what you're doing. The driving cursor in your FOR loop needs a read-consistent view of the data as of the moment it was opened, and every commit inside the loop (autonomous or not) allows the undo needed to rebuild that view to be reused; eventually a fetch can't reconstruct its snapshot and fails with ORA-01555. The PRAGMA AUTONOMOUS_TRANSACTION doesn't prevent it because it only isolates the delete and its commit - the outer cursor still has to stay consistent across all of those commits.
Skip the function altogether (doing DML in a function is usually wrong anyway) and use only the procedure, e.g.
PROCEDURE prc_clean
IS
count_all_deleted_vals NUMBER (10) := 0;
BEGIN
FOR dps IN (SELECT x AS dpid, y AS tsUntil
FROM Z
WHERE some conditions)
LOOP
DELETE FROM VALUES
WHERE DATAPOINT_ID = dps.dpid
AND TS < dps.tsuntil;
count_all_deleted_vals := count_all_deleted_vals + SQL%ROWCOUNT;
END LOOP;
COMMIT;
DBMS_OUTPUT.PUT_LINE ('Removed ' || count_all_deleted_vals || ' values');
END;
I have a big Oracle script with thousands of package calls inside a BEGIN - END;
Is there a way to ignore the lines that cause errors and continue executing the next lines? Some sort of "On Error Resume Next" in VB.
If you have only one BEGIN END section, then you can use EXCEPTION WHEN OTHERS THEN NULL.
SQL> declare
v_var pls_integer;
begin
select 1 into v_var from dual;
-- now error
select 'A' into v_var from dual;
exception when others then null;
end;
SQL> /
PL/SQL procedure successfully completed.
SQL> declare
v_var pls_integer;
begin
select 1 into v_var from dual;
-- now error
select 'A' into v_var from dual;
--exception when others then null;
end;
/
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 6
SQL>
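Note that with a single handler at the end of the section, everything after the first failing statement is skipped. If you want execution to continue with the next statement ("resume next" behaviour), each statement needs its own sub-block - a minimal sketch:
declare
  v_var pls_integer;
begin
  begin
    select 'A' into v_var from dual;  -- raises ORA-06502
  exception when others then null;    -- swallowed; execution continues
  end;
  begin
    select 1 into v_var from dual;    -- still runs
  exception when others then null;
  end;
end;
/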
The whole concept of "ignore errors" is a bug waiting to happen, and a lie to the caller if any errors do occur. That is not to say you cannot trap errors and continue processing, just that you MUST handle the errors. For example, assume the use case: "Data has been loaded into a stage table from multiple .csv files. Now load it into table A and table B according to ....".
create procedure
Load_Tables_A_B_from_Stage(process_message out varchar2)
is
Begin
For rec in (select * from stage)
loop
begin
insert into table_a (col1, col2)
values (rec.col_a1, rec.col_a2);
insert into table_b (col1, col2)
values (rec.col_b1, rec.col_b2);
exception
when others then null;
end;
end loop;
process_message := 'Load Tables A,B Complete';
end ;
Now suppose a user created a .csv file and entered "n/a" in numeric columns where there was no value or the value was unknown. The result of this all-too-common occurrence is that all such rows silently fail to load, but you have no way of knowing that until the user complains their data was not loaded even though you told them it was. Further, you have no way of determining the problem.
A much better approach is to "capture and report".
create procedure
Load_Tables_A_B_from_Stage(process_message out varchar2)
is
load_error_occurred boolean := False;
Begin
For rec in (select * from stage)
loop
begin
insert into table_a (col1, col2)
values (rec.col_a1, rec.col_a2);
exception
when others then
log_load_error('Load_Tables_A_B_from_Stage', rec.stage_id, sqlerrm);
load_error_occurred := True;
end;
begin
insert into table_b (col1, col2)
values (rec.col_b1, rec.col_b2);
exception
when others then
log_load_error('Load_Tables_A_B_from_Stage', rec.stage_id, sqlerrm);
load_error_occurred := True;
end;
end loop;
if load_error_occurred then
process_message := 'Load Tables A,B Complete: Error(s) Detected';
else
process_message := 'Load Tables A,B Complete: Successful No Error(s)';
end if;
end Load_Tables_A_B_from_Stage ;
Now you have informed the user of the actual status, and when you are contacted you can readily identify the issue.
"User" here is used in the most general sense - it could mean a calling routine rather than an individual. The point is that you do not have to terminate your process because of errors, but DO NOT ignore them.
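For reference, log_load_error is not a built-in; a minimal sketch of such a helper (the error-log table and its columns are made up) might use an autonomous transaction so the logged rows survive any later rollback:
create table load_error_log (
  logged_at  timestamp default systimestamp,
  proc_name  varchar2(128),
  source_id  number,
  error_text varchar2(4000)
);

create or replace procedure log_load_error (
  p_proc_name in varchar2,
  p_source_id in number,
  p_error     in varchar2
) is
  pragma autonomous_transaction;  -- commit the log row independently of the load
begin
  insert into load_error_log (proc_name, source_id, error_text)
  values (p_proc_name, p_source_id, p_error);
  commit;
end log_load_error;
/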
I don't think there is any magic one-liner that will solve this.
As others have suggested, using an editor to automate wrapping each call in a BEGIN-EXCEPTION-END block might be quicker/easier.
But if you feel a little adventurous, try this strategy:
Let's assume you have this:
BEGIN
proc1;
proc2;
proc3;
.
.
.
proc1000;
END;
You could try this (untested, uncompiled but might give you an idea of what to try):
DECLARE
l_progress NUMBER := 0;
l_proc_no NUMBER := 0;
e_proc_err EXCEPTION;
-- A 'runner' procedure that manages the counters and runs/skips depending on these values
PROCEDURE run_proc ( pname IN VARCHAR2 ) IS
BEGIN
l_proc_no := l_proc_no + 1;
IF l_proc_no >= l_progress
THEN
-- log 'Running pname'
EXECUTE IMMEDIATE 'BEGIN ' || pname || '; END;' ;
l_progress := l_progress + 1;
ELSE
-- log 'Skipping pname'
END IF;
EXCEPTION
WHEN OTHERS THEN
-- log 'Error in pname'
l_progress := l_progress + 1;
RAISE e_proc_err;
END;
BEGIN
   l_progress := 0;
   <<start>>
   BEGIN
      -- the calls sit in a labelled sub-block: a GOTO cannot jump from an
      -- exception handler back into its own block, but it can jump to a
      -- label in an enclosing block
      l_proc_no := 0;
      run_proc ( 'proc1' );
      run_proc ( 'proc2' );
      run_proc ( 'proc3' );
      .
      .
      run_proc ( 'proc1000' );
   EXCEPTION
      WHEN e_proc_err THEN
         GOTO start;
      WHEN OTHERS THEN
         RAISE;
   END;
END;
The idea here is to add a 'runner' procedure to execute each procedure dynamically and log the run, skip, error.
We maintain a global count of the current process number (l_proc_no) and overall count of steps executed (l_progress).
When an error occurs we log it, raise it and let it fall through to the sub-block's EXCEPTION handler, where the whole sequence is restarted via an (evil) GOTO.
The GOTO is placed so that the overall execution count (l_progress) is unchanged but the process number (l_proc_no) is reset to 0.
Now when run_proc is called for a step that has already been executed, it sees that l_progress is greater than l_proc_no and skips it.
Why is this better than simply wrapping a BEGIN EXCEPTION END around each call?
It might not be, but you make a smaller change to each line of code, and you standardise the logging around each call more neatly.
The danger is a potential infinite loop which is why I specify e_proc_err to denote errors within the called procedures. But it might need tweaking to make it robust.
How to handle cursor exception when the select query returns "zero" records
I have a cursor in a procedure, and after cursor initialization I'm iterating through the cursor to access the data from it.
But the problem is that when the cursor's select query returns 0 records, it throws the exception
ORA-06531: Reference to uninitialized collection.
How to handle this exception?
---procedure code
create or replace PROCEDURE BIQ_SECURITY_REPORT
(out_chr_err_code OUT VARCHAR2,
out_chr_err_msg OUT VARCHAR2,
out_security_tab OUT return_security_arr_result
)
IS
l_chr_srcstage VARCHAR2 (200);
lrec return_security_report;
CURSOR cur_security_data IS
SELECT
"ID" "requestId",
"ROOM" "room",
"FIRST_NAME" "FIRST_NAME",
"LAST_NAME" "LAST_NAME",
FROM
"BI_REQUEST_CATERING_ACTIVITY" ;
TYPE rec_security_data IS TABLE OF cur_security_data%ROWTYPE
INDEX BY PLS_INTEGER;
l_cur_security_data rec_security_data;
l_num_counter PLS_INTEGER := 0;
begin
OPEN cur_security_data;
LOOP
FETCH cur_security_data
BULK COLLECT INTO l_cur_security_data
LIMIT 1000;
EXIT WHEN l_cur_security_data.COUNT = 0;
lrec := return_security_report();
out_security_tab := return_security_arr_result(return_security_report());
out_security_tab.delete;
FOR i IN 1 .. l_cur_security_data.COUNT
LOOP
BEGIN
l_num_counter := l_num_counter + 1;
lrec := return_security_report();
lrec.requestid := l_cur_security_data(i).requestId;
lrec.room := l_cur_security_data(i).room;
lrec.firstName := l_cur_security_data(i).firstName;
IF l_num_counter > 1
THEN
out_security_tab.extend();
out_security_tab(l_num_counter) := return_security_report();
ELSE
out_security_tab := return_security_arr_result(return_security_report());
END IF;
out_security_tab(l_num_counter) := lrec;
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.PUT_LINE('Error occurred : ' || SQLERRM);
END;
END LOOP;
END LOOP;
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.PUT_LINE ('HERE INSIIDE OTHERS' || SQLERRM);
END;
Can you please explain how to handle it?
You must be using out_security_tab, which is an OUT parameter, in some other code where the procedure is called.
In your procedure, if the cursor returns zero rows then the loop body is never executed, so your code never initializes out_security_tab, which leads to the error that you are facing.
There is a simple way to avoid this:
initialize out_security_tab outside the loop -- which will definitely initialize it
Alternatively, you can add an OUT flag set to Y or N based on whether the cursor returned any rows -- not recommended
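For example, a stripped-down sketch of the first option (assuming return_security_arr_result is a nested table type, as the extend() calls imply): initialize the OUT collection once at the top of the block and only extend it inside the loop:
begin
   out_security_tab := return_security_arr_result();  -- initialized even when the cursor returns no rows
   OPEN cur_security_data;
   LOOP
      FETCH cur_security_data BULK COLLECT INTO l_cur_security_data LIMIT 1000;
      EXIT WHEN l_cur_security_data.COUNT = 0;
      FOR i IN 1 .. l_cur_security_data.COUNT
      LOOP
         l_num_counter := l_num_counter + 1;
         out_security_tab.extend();
         lrec := return_security_report();
         -- populate lrec from l_cur_security_data(i) as before
         out_security_tab(l_num_counter) := lrec;
      END LOOP;
   END LOOP;
   CLOSE cur_security_data;
end;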
Cheers!!
I want to use FORALL to insert data into a table. But in my code below I cannot get the l_final_amt and l_reference_number values outside the FOR loop over l_tbl_table_test_retrieve.
How can I use FORALL to insert data into a table when some of the values are not in the given type?
CREATE OR REPLACE PACKAGE test_FORALL AS
PROCEDURE pr_test_FORALL;
END test_FORALL;
CREATE OR REPLACE PACKAGE BODY test_FORALL AS
PROCEDURE pr_test_FORALL IS
TYPE ty_tbl_table_test IS TABLE OF table_test%ROWTYPE INDEX BY BINARY_INTEGER;
l_tbl_table_test_retrieve ty_tbl_table_test;
l_tbl_table_test ty_tbl_table_test;
l_final_amt INTEGER := 0; -- initialise, otherwise NULL + 10 stays NULL
l_reference_number VARCHAR2(100);
BEGIN
SELECT * BULK COLLECT
INTO l_tbl_table_test_retrieve
FROM table_test t1;
FOR i IN 1 .. l_tbl_table_test_retrieve.COUNT
LOOP
l_tbl_table_test(l_tbl_table_test.COUNT + 1) := l_tbl_table_test_retrieve(i);
l_final_amt := l_final_amt + 10;
l_reference_number := SYSDATE + l_tbl_table_test_retrieve(i).ID;
insert into some_other_table(fname, address,final_amt,ref_number)
values(l_tbl_table_test_retrieve(i).fname, l_tbl_table_test_retrieve(i).address,l_final_amt,l_reference_number);
END LOOP;
--I want to insert into some_other_table using FORALL. But,l_final_amt and l_reference_number variables
-- are not available in l_tbl_table_test_retrieve.
EXCEPTION
   WHEN OTHERS THEN
      DBMS_OUTPUT.put_line('EXCEPTION occurred');
END pr_test_FORALL;
END test_FORALL;
Use a cursor and add the fields into the rows returned by the cursor:
PROCEDURE pr_test_FORALL IS
   -- Add placeholder columns to the cursor so each fetched row can carry the
   -- derived values; the CASTs give the placeholders explicit datatypes
   CURSOR csrData IS
      SELECT t1.*,
             CAST(NULL AS NUMBER) AS COUNT_VAL,
             CAST(NULL AS NUMBER) AS FINAL_AMT,
             CAST(NULL AS VARCHAR2(100)) AS REFERENCE_NUMBER
        FROM TABLE_TEST t1;
   TYPE ty_tbl_table_test IS
      TABLE OF csrData%ROWTYPE -- Note: csrData%ROWTYPE
      INDEX BY BINARY_INTEGER;
   l_tbl ty_tbl_table_test;
   l_final_amt INTEGER := 0;
   l_reference_number VARCHAR2(100);
BEGIN
   OPEN csrData;
   FETCH csrData
      BULK COLLECT INTO l_tbl;
   CLOSE csrData;
FOR i IN 1 .. l_tbl.COUNT LOOP
l_final_amt := l_final_amt + 10;
l_tbl(i).FINAL_AMT := l_final_amt;
l_tbl(i).REFERENCE_NUMBER := SYSDATE + l_tbl(i).ID;
END LOOP;
FORALL i IN l_tbl.FIRST..l_tbl.LAST
INSERT INTO SOME_OTHER_TABLE
(FNAME, ADDRESS, FINAL_AMT, REF_NUMBER)
VALUES
(l_tbl(i).FNAME,
l_tbl(i).ADDRESS,
l_tbl(i).FINAL_AMT,
l_tbl(i).REFERENCE_NUMBER);
EXCEPTION
   WHEN OTHERS THEN
      DBMS_OUTPUT.put_line('EXCEPTION occurred');
END pr_test_FORALL;
You could convert the whole thing into two plain inserts of the form below, one for each of the required tables.
I see that in your code l_reference_number is defined as a VARCHAR2 variable, but it sounds like a number. (SYSDATE + some_number) will yield a DATE; it will be implicitly converted into a string based on your NLS_ settings when you assign it to a VARCHAR2. I'm not sure what you want to store in there as a "REFERENCE_NUMBER".
INSERT INTO some_other_table (
fname,
address,
final_amt,
ref_number
)
SELECT fname,
address,
10 * ROWNUM AS final_amt,
SYSDATE + id as reference_number
FROM table_test;
Is it possible to see the DML (SQL Statement) that is being run that caused a trigger to be executed?
For example, inside an INSERT trigger I would like to get this:
"insert into myTable (name) values ('Fred')"
I read about ora_sql_txt(sql_text) in articles such as this but couldn't get it working - not sure if that is even leading me down the right path?
We are using Oracle 10.
Thank you in advance.
=========================
[EDITED] MORE DETAIL: We have the need to replicate an existing database (DB1) into a classified database (DB2) that is not accessible via the network. I need to keep these databases in sync. This is a one-way sync from (DB1) to (DB2), since (DB2) will contain additional tables and data that is not contained in the (DB1) system.
I have to determine a way to sync these databases without bringing them down (say, for a backup and restore) because it needs to stay live. So I thought that if I can store the actual DML being run (when data changes), I could "play-back" the DML on the new database to update it, just like someone was hand-entering it back in.
I can't bring over all the data because of the sheer size of it, and I can't just copy over the changed records because of FK constraints and the order in which I insert/update records. I figured that if I could "play-back" a log of what happened, using the exact SQL that changed the master, I could keep the databases in sync.
My current plan of attack was to keep a log of all records that were changed, inserted, and deleted, and when I want to sync, the system generates DML to insert/update/delete those records. Then I just take the .SQL file to the classified system and run the script. The problem I'm running into is FKs. (Because when I generate the DML I only know what the current state of the data is, not its path to get there - so the ordering of statements is an issue.) I guess I could disable all FKs, do the merge, then re-enable all FKs...
So - does my approach of storing the actual DML as-it-happens suck pondwater, or is there a better solution???
"does my approach of storing the actual DML as-it-happens suck pondwater?" Yes..
Strict ordering of the DML on your DB1 does not really exist. Multiple processes, muiltiple cores, things essentially happening at the essentially the same time.
And the DML, even when it happens sequentially doesn't act like it. Say the following two update statements run in seperate processes with seperate transactions, where the update in transaction 2 starts before transaction 1 commits:
update table_a set col_a = 10 where col_b = 'A' -- transaction 1
update table_a set col_c = 'Error' where col_a = 10 -- transaction 2
Since the changes made in the first transaction are not visible to the second transaction, the rows changed by the second transaction will not include those of the first. But if you manage to capture the DML and replay it sequentially, transaction 1's changes will be visible, so transaction 2's changes will be different. (See pages 40 and 41 of Tom Kyte's Expert Oracle Database Architecture, Second Edition.)
Hopefully you are using bind variables, so the DML by itself wouldn't be meaningful: update table_a set col_a = :col_a where id = :id Now what? OK, so you want the DML with its variable bindings.
Do you use sequences? If so, the next_val will not stay in sync between DB1 and DB2. (For example, instance failures can cause lost sequence values - are both systems going to fail at the same time?) And if you are dealing with RAC, where the next_val varies depending on the node, forget it.
I would start by investigating Oracle's replication.
I had a situation where I needed to move metadata/configuration changes (stored in a handful of tables) from a development environment to a production environment once tested. Something like GoldenGate is the product to use for this, but it can be costly and complicated to set up and administer.
The following procedure generates a trigger and attaches it to a table whose DML needs to be saved. The trigger re-creates the DML and, in this case, saves it to an audit table - it's up to you what you do with it. You can use the statements saved to the audit table to replay changes from a given point in time (cut and paste, or develop a procedure to apply them to the target).
Hope you find this useful.
procedure gen_trigger( p_tname in varchar2 )
is
l_theCursor integer default dbms_sql.open_cursor;
l_query varchar2(1000) default 'select * from ' || p_tname;
l_colCnt number := 0;
l_descTbl dbms_sql.desc_tab;
trg varchar2(32767) := null;
expr varchar2(32767) := null;
cmd varchar2(32767) := null;
begin
dbms_sql.parse( l_theCursor, l_query, dbms_sql.native );
dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );
trg := q'#
create or replace trigger <%TABLE_NAME%>_audit
after insert or update or delete on <%TABLE_NAME%> for each row
declare
qs varchar2(20) := q'[q'^]';
qe varchar2(20) := q'[^']';
command clob;
nlsd varchar2(100);
begin
select value into nlsd from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
execute immediate 'alter session set nls_date_format = ''YYYY/MM/DD hh24:mi:ss'' ';
if inserting then
command := <%INSERT_COMMAND%>;
end if;
if updating then
command := <%UPDATE_COMMAND%>;
end if;
if deleting then
command := <%DELETE_COMMAND%>;
end if;
insert into x_audit values (systimestamp, command);
execute immediate q'+alter session set nls_date_format = '+'|| nlsd || q'+'+';
end;
#';
-- Create the insert command
cmd := q'#'insert into <%TABLE_NAME%> (<%INSERT_COLS%>) values ('||<%INSERT_VAL%>||')'#';
-- columns clause
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || ',';
end if;
expr := expr || l_descTbl(i).col_name;
end loop;
cmd := replace(cmd,'<%INSERT_COLS%>',expr);
-- values clause
expr := null;
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || q'#||','||#';
end if;
expr := expr || 'qs||:new.' || l_descTbl(i).col_name || '||qe';
end loop;
cmd := replace(cmd,'<%INSERT_VAL%>',expr);
trg := replace(trg,'<%INSERT_COMMAND%>',cmd);
-- create the update command
-- set clause
expr := null;
cmd := q'#'update <%TABLE_NAME%> set '||<%UPDATE_COLS%>||' where '||<%WHERE_CLAUSE%>#';
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || q'#||','||#';
end if;
expr := expr || q'#'#' || l_descTbl(i).col_name || q'# = '||#'|| 'qs||:new.'||l_descTbl(i).col_name || '||qe';
end loop;
null;
cmd := replace(cmd,'<%UPDATE_COLS%>',expr);
trg := replace(trg,'<%UPDATE_COMMAND%>',cmd);
-- create the delete command
expr := null;
cmd := q'#'delete <%TABLE_NAME%> where '||<%WHERE_CLAUSE%>#';
trg := replace(trg,'<%DELETE_COMMAND%>',cmd);
-- where clause using primary key columns (used by update and delete)
expr := null;
for pk in (SELECT column_name FROM all_cons_columns WHERE constraint_name = (
SELECT constraint_name FROM user_constraints
WHERE UPPER(table_name) = UPPER(p_tname) AND CONSTRAINT_TYPE = 'P'
)) loop
if expr is not null then
expr := expr || q'#|| ' and '||#';
end if;
expr := expr || q'#'#' || pk.column_name || q'# = '||#'|| 'qs||:old.'|| pk.column_name || '||qe';
end loop;
if expr is null then -- must have a primary key
raise_application_error(-20000,'The table must have a primary key defined');
end if;
trg := replace(trg,'<%WHERE_CLAUSE%>',expr);
trg := replace(trg,'<%TABLE_NAME%>',p_tname);
execute immediate trg;
null;
exception
when others then
execute immediate 'alter session set nls_date_format=''YYYY/MM/DD'' ';
raise;
end;
/* Example
create table t1 (
col1 varchar2(100),
col2 number,
col3 date,
constraint pk_t1 primary key (col1)
)
/
BEGIN
GEN_TRIGGER('T1');
END;
/
-- Trigger generated ....
create or replace trigger t1_audit after
insert or
update or
delete on t1 for each row
declare
qs varchar2(20) := q'[q'^]';
qe varchar2(20) := q'[^']';
command clob;
nlsd varchar2(100);
begin
select value into nlsd from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
execute immediate 'alter session set nls_date_format = ''YYYY/MM/DD hh24:mi:ss'' ';
if inserting then
command := 'insert into T1 (COL1,COL2,COL3) values ('||qs||:new.col1||qe||','||qs||:new.col2||qe||','||qs||:new.col3||qe||')';
end if;
if updating then
command := 'update T1 set '||'COL1 = '||qs||:new.col1||qe||','||'COL2 = '||qs||:new.col2||qe||','||'COL3 = '||qs||:new.col3||qe||' where '||'COL1 = '||qs||:old.col1||qe;
end if;
if deleting then
command := 'delete T1 where '||'COL1 = '||qs||:old.col1||qe;
end if;
insert into x_audit values
(systimestamp, command
);
execute immediate q'+alter session set nls_date_format = '+'|| nlsd || q'+'+';
end;
*/
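The generated trigger writes to an x_audit table that you need to create up front; based on the insert it performs, something like this would do (the column names are my guess, only the types and their order matter):
create table x_audit (
  changed_at timestamp,
  change_sql clob
);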
That function (ora_sql_txt) only works for 'event' triggers, as discussed here.
You should look into Fine-Grained Auditing as a mechanism for this. Details here
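For example, a basic policy on a table (the schema, table and policy names below are placeholders) that captures INSERT/UPDATE/DELETE statements; with the extended audit trail the statement text and bind values end up in DBA_FGA_AUDIT_TRAIL:
begin
  dbms_fga.add_policy(
    object_schema   => 'APP_OWNER',            -- placeholder owner
    object_name     => 'MYTABLE',              -- placeholder table
    policy_name     => 'MYTABLE_DML_CAPTURE',
    statement_types => 'INSERT,UPDATE,DELETE',
    audit_trail     => dbms_fga.db_extended    -- record SQL text and binds
  );
end;
/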
When the trigger code runs, don't you already know the DML that caused it to run?
CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
...
In this case it must have been an insert or an update statement on the emp_tab table.
To find out whether it was an update or an insert:
if inserting then
...
elsif updating then
...
end if;
The exact column values are available in the :old and :new pseudo-columns.
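As a minimal illustration (the my_emp_audit table and the Empno/Sal columns are assumptions here), the trigger can record which operation fired and the relevant values itself, without ever needing the original statement text:
CREATE OR REPLACE TRIGGER Emp_tab_audit
BEFORE INSERT OR UPDATE OR DELETE ON Emp_tab
FOR EACH ROW
BEGIN
  IF inserting THEN
    INSERT INTO my_emp_audit (action, empno, sal) VALUES ('I', :new.empno, :new.sal);
  ELSIF updating THEN
    INSERT INTO my_emp_audit (action, empno, sal) VALUES ('U', :new.empno, :new.sal);
  ELSIF deleting THEN
    INSERT INTO my_emp_audit (action, empno, sal) VALUES ('D', :old.empno, :old.sal);
  END IF;
END;
/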