Can an insert operation make another DDL operation wait? - oracle

I am trying to understand the reason why I get the below error.
`ORA-04021: timeout occurred while waiting to lock object`
This error is thrown from a procedure while running the command alter table <<T_NAME>> truncate subpartition <<SUBPARTITION_NAME>>.
v_dyncursor_stmt := 'with obj as (select /*+ materialize */ data_object_id, subobject_name from user_objects where object_name = UPPER(''' ||
p_table_name ||
''') and object_type = ''TABLE SUBPARTITION'') select '||p_hint||' distinct subobject_name from ' ||
p_table_name || ' t, obj where data_object_id = DBMS_MView.PMarker(t.rowid) and ' || p_where;
/* log */
log_text(v_unit_name, 'INFO', 'Open cursor', v_dyncursor_stmt);
/* loop over subpartitions which need to be truncated */
v_counter := 0;
open c_subpartitions for v_dyncursor_stmt;
loop
FETCH c_subpartitions
INTO v_subpartition_name;
EXIT WHEN c_subpartitions%NOTFOUND;
v_statement := 'alter table ' || p_table_name || ' truncate subpartition "' || v_subpartition_name || '"';
execStmt(v_statement);
end loop;
close c_subpartitions;
The code calls the above procedure twice, and the first attempt is successful: it truncates the subpartition fine. On the second attempt it fails... The execStmt procedure is given below; the error is thrown from the EXECUTE IMMEDIATE line...
procedure execStmt(p_statement IN VARCHAR2) IS
v_unit_name varchar2(1024) := 'execStmt';
v_simulate varchar2(256);
begin
v_simulate := utilities.get_parameter('PART_PURGE_SIMULATE', '0');
if (v_simulate = '1') then
log_text(v_unit_name, 'INFO', 'Statement skipped. (PART_PURGE_SIMULATE=1)',
p_statement);
else
/* log */
log_text(v_unit_name, 'INFO', 'Executing statement', p_statement);
EXECUTE IMMEDIATE p_statement;
end if;
end;
As this happens mostly over the weekend, I do not get a chance to inspect the lock tables to see what has locked the object, but I know for sure that it is a table which has a lot of inserts happening.
So my question is: can an insert operation on a table prevent the above DDL?
From the Oracle docs, I see that an insert acquires an SX lock, which is explained as below:
A row exclusive lock (RX), also called a subexclusive table lock (SX), indicates that the transaction holding the lock has updated table rows or issued SELECT ... FOR UPDATE. An SX lock allows other transactions to query, insert, update, delete, or lock rows concurrently in the same table. Therefore, SX locks allow multiple transactions to obtain simultaneous SX and SS locks for the same table.

This error happens because the subpartition you are trying to truncate is in use at that time. And as you mentioned, insert statements are running at that time; the row-exclusive (SX) table lock each uncommitted insert holds is enough to block the DDL, which needs an exclusive lock on the object it is truncating.
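Two things that can help (both are suggestions on my part, not something from the original post). On 11g and later you can make the session wait for the lock instead of failing straight away, via DDL_LOCK_TIMEOUT - though it applies to DML (TM) lock waits and may not cover every library-cache wait behind ORA-04021, so treat it as something to try rather than a guaranteed fix - and when the error does fire you can look for the blocker in V$LOCK. A minimal sketch ('T_NAME' is the placeholder from above):
-- wait up to 5 minutes for locks before giving up (11g+)
ALTER SESSION SET ddl_lock_timeout = 300;

-- who currently holds a table (TM) lock on the table?
SELECT s.sid, s.serial#, s.username, s.program, l.lmode, l.request
FROM v$lock l
JOIN v$session s ON s.sid = l.sid
WHERE l.type = 'TM'
AND l.id1 = (SELECT object_id
             FROM dba_objects
             WHERE object_name = 'T_NAME'
             AND object_type = 'TABLE');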

Related

How to delete sequences and procedures during logoff trigger?

Could you please help me with a unique situation I am in? I am receiving "ORA-30511: invalid DDL operation in system triggers" when dropping sequences and procedures during a logoff trigger.
I need to delete tables, sequences and procedures of users before the logoff event happens. I am writing the table details into a DB_OBJECTS table upon creation, using a separate trigger. Below is my logoff trigger - could you please point out where I am going wrong? Dropping tables works fine in the below code; only dropping sequences and procedures gives me the "ORA-30511: invalid DDL operation in system triggers" error.
CREATE OR REPLACE TRIGGER DELETE_BEFORE_LOGOFF
BEFORE LOGOFF ON DATABASE
DECLARE
USER_ID NUMBER := SYS_CONTEXT('USERENV', 'SESSIONID');
BEGIN
FOR O IN (SELECT USER, OBJECT_NAME, OBJECT_TYPE
FROM DB_OBJECTS WHERE SID = USER_ID
AND USERNAME = USER AND SYSDATE > CREATED_DTTM) LOOP
IF O.OBJECT_TYPE = 'TABLE' THEN
EXECUTE IMMEDIATE 'DROP TABLE ' || O.USER || '.' || O.OBJECT_NAME || ' CASCADE CONSTRAINTS';
ELSIF O.OBJECT_TYPE = 'SEQUENCE' THEN
EXECUTE IMMEDIATE 'DROP SEQUENCE ' || O.USER || '.' || O.OBJECT_NAME;
ELSIF O.OBJECT_TYPE = 'PROCEDURE' THEN
EXECUTE IMMEDIATE 'DROP PROCEDURE ' || O.USER || '.' || O.OBJECT_NAME;
END IF;
END LOOP;
EXCEPTION WHEN NO_DATA_FOUND THEN NULL;
END;
/
That's a simple one.
Error code: ORA-30511
Description: invalid DDL operation in system triggers
Cause: An attempt was made to perform an invalid DDL operation in a system trigger. Most DDL operations currently are not supported in system triggers. The only currently supported DDL operations are table operations and ALTER/COMPILE operations.
Action: Remove invalid DDL operations in system triggers.
That's why only
Dropping tables is working fine
succeeded.
Therefore, you can't do that using a trigger.
You asked (in a comment) how to drop these objects, then. Manually, as far as I can tell. Though, that's quite unusual - what if someone accidentally logs off? You'd drop everything they created. If you use that schema for educational purposes (for example, every student gets their own schema), then you could create a "clean-up" script you'd run once class is over. Something like this:
SET SERVEROUTPUT ON;
DECLARE
l_user VARCHAR2 (30) := 'SCOTT';
l_str VARCHAR2 (200);
BEGIN
IF USER = l_user
THEN
FOR cur_r IN (SELECT object_name, object_type
FROM user_objects
WHERE object_name NOT IN ('EMP',
'DEPT',
'BONUS',
'SALGRADE'))
LOOP
BEGIN
l_str :=
'drop '
|| cur_r.object_type
|| ' "'
|| cur_r.object_name
|| '"';
DBMS_OUTPUT.put_line (l_str);
EXECUTE IMMEDIATE l_str;
EXCEPTION
WHEN OTHERS
THEN
NULL;
END;
END LOOP;
END IF;
END;
/
PURGE RECYCLEBIN;
It is far from perfect; I use it to clean up the Scott schema I use to answer questions on various sites, so - once it becomes a mess - I run that PL/SQL code several times (because of possible foreign key constraints).
Another option is to keep a create user script (along with all grant statements) and - once class is over - drop the existing user and simply recreate it.
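A minimal sketch of that drop-and-recreate option (user name, password, tablespace and grants are assumptions for illustration):
-- run as a DBA: drop the schema and everything in it, then recreate it
DROP USER scott CASCADE;

CREATE USER scott IDENTIFIED BY tiger
DEFAULT TABLESPACE users
QUOTA UNLIMITED ON users;

GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE, CREATE PROCEDURE TO scott;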
Or, if that user contains some pre-built tables, keep an export file (I mean, the result of a Data Pump export) and import it after the user is dropped.
There are various options - I don't know whether I managed to guess correctly, but now you have something to think about.

Handle NULL within Oracle PL/SQL

Data is imported into the data_import table through SQL insert statements. These insert statements may not be the most accurate, so I want to let all data into the table (hence no PK to prevent duplicates).
The code checks the contents of the data_import table for duplicates using a count of occurrences; any duplicates are marked as in error.
DECLARE
CURSOR C_DUPLICATE_IMPORT_IDS
IS
SELECT COUNT (D.LEARNER_ID), D.LEARNER_ID
FROM ILA500.DATA_IMPORT D
WHERE D.USER_ID IN (SELECT S.OSUSER
FROM V$SESSION S
WHERE S.SID IN (SELECT DISTINCT V.SID
FROM V$MYSTAT V))
AND D.ERROR_FLAG = 'N'
AND D.LEARNER_ID <> NULL
HAVING COUNT (D.LEARNER_ID) > 1
GROUP BY D.LEARNER_ID;
V_DUPLICATE_IMPORT_IDS C_DUPLICATE_IMPORT_IDS%ROWTYPE;
BEGIN
OPEN C_DUPLICATE_IMPORT_IDS;
LOOP
FETCH C_DUPLICATE_IMPORT_IDS INTO V_DUPLICATE_IMPORT_IDS;
EXIT WHEN C_DUPLICATE_IMPORT_IDS%NOTFOUND;
UPDATE ILA500.DATA_IMPORT D
SET D.ERROR_FLAG = 'Y'
WHERE D.LEARNER_ID = V_DUPLICATE_IMPORT_IDS.LEARNER_ID;
UPDATE ILA500.DATA_IMPORT D
SET D.IMPORT_NOTIFICATION =
'DUPLICATE LEARNER_ID IDENTIFIED ('
|| V_DUPLICATE_IMPORT_IDS.LEARNER_ID
|| '). LEARNER_IDS ERROR_FLAG SET, THIS CASE WILL NOT IMPORT UNTIL CORRECTED.'
WHERE D.LEARNER_ID = V_DUPLICATE_IMPORT_IDS.LEARNER_ID;
IF V_DUPLICATE_IMPORT_IDS.LEARNER_ID IS NOT NULL
THEN
DBMS_OUTPUT.PUT_LINE (
'THE FOLLOWING LEARNER WAS IDENTIFIED AS A DUPLICATE '
|| V_DUPLICATE_IMPORT_IDS.LEARNER_ID);
ELSE
DBMS_OUTPUT.PUT_LINE (
CHR (10)
|| 'THERE ARE NO DUPLICATE LEARNER_IDS WITHIN THIS UPLOAD.');
END IF;
END LOOP;
CLOSE C_DUPLICATE_IMPORT_IDS;
DBMS_OUTPUT.PUT_LINE (CHR (10) || 'STEP 1 COMPLETED');
COMMIT;
END;
My problem comes within the IF statement
IF V_DUPLICATE_IMPORT_IDS.LEARNER_ID IS NOT NULL
THEN
DBMS_OUTPUT.PUT_LINE (
'THE FOLLOWING LEARNER WAS IDENTIFIED AS A DUPLICATE '
|| V_DUPLICATE_IMPORT_IDS.LEARNER_ID);
ELSE
DBMS_OUTPUT.PUT_LINE (
CHR (10)
|| 'THERE ARE NO DUPLICATE LEARNER_IDS WITHIN THIS UPLOAD.');
If the count is greater than 1 then data is returned and the
DBMS_OUTPUT.PUT_LINE ('THE FOLLOWING LEARNER WAS IDENTIFIED AS A DUPLICATE '|| V_DUPLICATE_IMPORT_IDS.LEARNER_ID);
statement successfully outputs the line.
However if a null count is returned within the query, the
ELSE (DBMS_OUTPUT.PUT_LINE (
CHR (10)
|| 'THERE ARE NO DUPLICATE LEARNER_IDS WITHIN THIS UPLOAD.');)
doesn't output the line.
How do I get DBMS_OUTPUT to produce data from a null?
However if a null count is returned ...
A null count can't be returned. The COUNT (D.LEARNER_ID) in your cursor query can't evaluate to null; it could produce zero but you're filtering any such results with your HAVING clause (which would also exclude null, if it could happen). And a null ID can't be returned because of how you are counting the values.
Your cursor query includes:
AND D.LEARNER_ID <> NULL
which isn't right; null isn't equal or unequal to anything, so this excludes all rows; you could do:
AND D.LEARNER_ID IS NOT NULL
but it's redundant anyway because you're doing COUNT(D.LEARNER_ID), which won't count nulls. It would make a difference if you were doing COUNT(*) though. db<>fiddle with just the cursor query, showing that effect.
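A contrived example of that difference (not the original fiddle, just to show the effect):
WITH t AS (
SELECT 1 AS learner_id FROM dual UNION ALL
SELECT NULL FROM dual UNION ALL
SELECT NULL FROM dual
)
SELECT COUNT(learner_id) AS cnt_col, -- 1: nulls are not counted
       COUNT(*)          AS cnt_all  -- 3: every row is counted
FROM t;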
Anyway... if you wanted to display a message for each ID if that had no duplicates you could remove the HAVING clause and test for non-zero instead of not-null; but from the text it looks like you want a single message if your cursor finds no rows at all. You can handle that with a 'found' flag which you set to true if you go into the cursor loop, and then test after the loop:
DECLARE
CURSOR C_DUPLICATE_IMPORT_IDS
IS
...
V_DUPLICATE_IMPORT_IDS C_DUPLICATE_IMPORT_IDS%ROWTYPE;
V_FOUND BOOLEAN := FALSE;
BEGIN
OPEN C_DUPLICATE_IMPORT_IDS;
LOOP
FETCH C_DUPLICATE_IMPORT_IDS INTO V_DUPLICATE_IMPORT_IDS;
EXIT WHEN C_DUPLICATE_IMPORT_IDS%NOTFOUND;
...
DBMS_OUTPUT.PUT_LINE (
'THE FOLLOWING LEARNER WAS IDENTIFIED AS A DUPLICATE '
|| V_DUPLICATE_IMPORT_IDS.LEARNER_ID);
V_FOUND := TRUE;
END LOOP;
CLOSE C_DUPLICATE_IMPORT_IDS;
IF NOT V_FOUND THEN
DBMS_OUTPUT.PUT_LINE (
CHR (10)
|| 'THERE ARE NO DUPLICATE LEARNER_IDS WITHIN THIS UPLOAD.');
END IF;
DBMS_OUTPUT.PUT_LINE (CHR (10) || 'STEP 1 COMPLETED');
COMMIT;
END;
db<>fiddle with and without matching data.
You could also simplify this a bit with an implicit cursor loop but the 'found' logic would be the same.
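A sketch of that implicit-loop version, with the cursor query and updates elided as before:
DECLARE
V_FOUND BOOLEAN := FALSE;
BEGIN
FOR R IN (SELECT COUNT (D.LEARNER_ID) AS CNT, D.LEARNER_ID
          FROM ILA500.DATA_IMPORT D
          WHERE ... -- same predicates as the explicit cursor
          GROUP BY D.LEARNER_ID
          HAVING COUNT (D.LEARNER_ID) > 1)
LOOP
... -- same updates as before
DBMS_OUTPUT.PUT_LINE (
'THE FOLLOWING LEARNER WAS IDENTIFIED AS A DUPLICATE ' || R.LEARNER_ID);
V_FOUND := TRUE;
END LOOP;
IF NOT V_FOUND THEN
DBMS_OUTPUT.PUT_LINE (
CHR (10) || 'THERE ARE NO DUPLICATE LEARNER_IDS WITHIN THIS UPLOAD.');
END IF;
DBMS_OUTPUT.PUT_LINE (CHR (10) || 'STEP 1 COMPLETED');
COMMIT;
END;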
When a DML statement fails, the whole statement is rolled back, regardless of how many rows were processed successfully before the error was detected. Historically, the only way around this problem was to process each row individually. The DBMS_ERRLOG package provides a procedure that enables you to create an error logging table, so that DML operations can continue after encountering errors rather than abort and roll back. This saves time and system resources.
You can use this package and set the primary key in your table. For example:
-- First, create the error logging table.
BEGIN
DBMS_ERRLOG.create_error_log (dml_table_name => 'dest_table');
END;
Then in your insert statement do this:
INSERT INTO dest_table
SELECT *
FROM source_table
LOG ERRORS INTO err$_dest_table ('INSERT') REJECT LIMIT UNLIMITED;
Finally, you don't need the duplicate-detection PL/SQL code above.
For more information, read the Oracle documentation on the DBMS_ERRLOG package and on DML error logging.
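If it helps, the rejected rows (the duplicates, in this scenario) can then be inspected in the error table; the ORA_ERR_* columns below are the standard ones DBMS_ERRLOG creates:
-- each rejected row is logged along with the error that rejected it
SELECT ora_err_number$, -- e.g. 1 for ORA-00001: unique constraint violated
       ora_err_mesg$,
       ora_err_optyp$,  -- I = insert, U = update, D = delete
       ora_err_tag$     -- the 'INSERT' tag from the LOG ERRORS clause
FROM err$_dest_table;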

Oracle PL/SQL how do you output how many inserts have been made in a FORALL statement

What's the best way of getting and outputting how many rows have been inserted in the FORALL statement I have below? I've seen SQL%BULK_ROWCOUNT but I'm not sure how that would work in the below statement.
Is it
DBMS_OUTPUT.PUT_LINE('rows inserted '||SQL%BULK_ROWCOUNT||'');
Does the above need to go in another FORALL statement? For the code below, how would I achieve this?
DECLARE
TYPE t_arc_act_plus_trigger1 IS TABLE OF arc_act_plus_triggers1%ROWTYPE;
v_arc_act_plus_triggers1 t_arc_act_plus_trigger1;
l_STOP_PROGRAM_YN VARCHAR2(1); -- read via Com0932.get_parameter in the loop below
CURSOR c_arc_act_plus_triggers1 IS
SELECT /*+ PARALLEL */ apt.*
FROM act_plus_triggers1 apt
WHERE NOT EXISTS
(SELECT 1
FROM act_plus_triggers_copy1 aptc
WHERE aptc.surr_id = apt.surr_id)
AND apt.status IN ('EXT', 'EXP');
BEGIN
OPEN c_arc_act_plus_triggers1;
LOOP
FETCH c_arc_act_plus_triggers1 BULK COLLECT INTO v_arc_act_plus_triggers1 LIMIT 10000; -- limit to 10k to avoid out of memory
FORALL i IN 1..v_arc_act_plus_triggers1.COUNT
INSERT /*+ APPEND_VALUES */ INTO arc_act_plus_triggers1 values v_arc_act_plus_triggers1(i);
Com0932.get_parameter ('ACT_ARCHIVE_TRIGGER_STOP_YN',l_STOP_PROGRAM_YN);
IF l_STOP_PROGRAM_YN = 'Y' THEN
p_location('insert_into_arc_act_plus - STOP_PROGRAM_YN flag = '||l_STOP_PROGRAM_YN||' so ROLLBACK');
ROLLBACK;
EXIT;
END IF;
-- **************************************************
-- Output how many records have been inserted here???
-- **************************************************
-- commit after every 10000 records into arc_act_plus_triggers1
COMMIT;
EXIT WHEN c_arc_act_plus_triggers1%NOTFOUND;
END LOOP;
CLOSE c_arc_act_plus_triggers1;
END;
I haven't checked as I have nothing to test against, so please forgive any 'missing semicolon'-type errors, and I'm afraid I'm not in a position to performance-check this.
Your code selects which rows to insert into the archive table based on their non-existence in the archive. Therefore simply use an INSERT based on a SELECT limited by a suitable ROWNUM value. Once you commit, the next time round the loop it won't try getting already-archived rows, as you just committed them.
I think this should be as quick if not quicker than bulkifying the inserts, with the advantage that it's simpler - Occam's Razor and all that.
DECLARE
l_commit_count NUMBER := 10000;
l_rows_copied NUMBER := 0;
l_batch_rows NUMBER;
BEGIN
DBMS_OUTPUT.PUT_LINE('Started at '||TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
LOOP
INSERT /*+ APPEND */
INTO arc_act_plus_triggers1
SELECT /*+ PARALLEL */ apt.*
FROM act_plus_triggers1 apt
WHERE NOT EXISTS
(SELECT 1
 FROM arc_act_plus_triggers1 arc -- check the insert target, so each pass only picks up unarchived rows
 WHERE arc.surr_id = apt.surr_id)
AND apt.status IN ('EXT', 'EXP')
AND rownum <= l_commit_count;
l_batch_rows := SQL%ROWCOUNT; -- capture before COMMIT resets the attribute
COMMIT;
l_rows_copied := l_rows_copied + l_batch_rows;
EXIT WHEN l_batch_rows < 1;
END LOOP;
DBMS_OUTPUT.PUT_LINE('Finished at '||TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
DBMS_OUTPUT.PUT_LINE(TO_CHAR(l_rows_copied)||' rows copied to the archive table');
END;
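For completeness, on the SQL%BULK_ROWCOUNT part of the original question: after a FORALL, SQL%ROWCOUNT holds the total number of rows processed across all iterations, and SQL%BULK_ROWCOUNT(i) holds the count for the i-th collection element (typically 1 per element for an INSERT ... VALUES). So in the original loop the count could be printed like this (a sketch against the question's own variables):
FORALL i IN 1..v_arc_act_plus_triggers1.COUNT
INSERT /*+ APPEND_VALUES */ INTO arc_act_plus_triggers1 VALUES v_arc_act_plus_triggers1(i);
-- total rows inserted by the whole FORALL:
DBMS_OUTPUT.PUT_LINE('rows inserted this batch: '||SQL%ROWCOUNT);
-- per-element counts, if ever needed:
FOR i IN 1..v_arc_act_plus_triggers1.COUNT LOOP
DBMS_OUTPUT.PUT_LINE('element '||i||': '||SQL%BULK_ROWCOUNT(i));
END LOOP;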

Fetching records one by one dynamically - Oracle

I am creating a dynamic procedure which accepts 2 table names, fetches the records from one table, and after a certain number of records (let's say 100) issues a commit.
Both tabName and temp_tabName are always identical in structure. Since I have billions of records in the first table, I am doing the commit after every 10000 records in order to get around the undo tablespace problem.
What I have so far is:
CREATE OR REPLACE PROCEDURE MyProdecure (
tabName IN USER_TABLES.table_name%TYPE,
temp_tabName IN USER_TABLES.table_name%TYPE
)
IS
v_sql VARCHAR2 (100) := 'select * from ' || tabName;
TEMP_CURSOR SYS_REFCURSOR;
COUNT NUMBER (6) := 0;
BEGIN
OPEN TEMP_CURSOR FOR v_sql;
LOOP
FETCH TEMP_CURSOR INTO V_ROW;
--=================================================================================
/*
* I need the code here to fetch the 100 record from TEMP_CURSOR into a Variable
* and insert into the second table. or one record increment the count and if
* count>= 100 commit
*What would be the data type of V_ROW. How to fetch the data from V_ROW and complete the insert into command.
*/
--================================================================================
EXIT WHEN TEMP_CURSOR%NOTFOUND;
END LOOP;
CLOSE TEMP_CURSOR;
END MyProdecure;
There is no way to define V_ROW in such a way as to make your PL/SQL block work correctly for an input table whose name and structure are not known until runtime.
To make your approach work, you would need to use DBMS_SQL.
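By way of illustration, a minimal sketch (the table name EMP is just a placeholder) of the first step such a DBMS_SQL approach needs - discovering the table's structure at runtime:
DECLARE
l_cur  INTEGER := DBMS_SQL.OPEN_CURSOR;
l_cnt  NUMBER;
l_desc DBMS_SQL.DESC_TAB;
BEGIN
-- DBMS_ASSERT validates the name, guarding the dynamic SQL against injection
DBMS_SQL.PARSE(l_cur,
'select * from ' || DBMS_ASSERT.SQL_OBJECT_NAME('EMP'),
DBMS_SQL.NATIVE);
DBMS_SQL.DESCRIBE_COLUMNS(l_cur, l_cnt, l_desc);
FOR i IN 1 .. l_cnt LOOP
DBMS_OUTPUT.PUT_LINE(l_desc(i).col_name || ' (type code ' || l_desc(i).col_type || ')');
END LOOP;
DBMS_SQL.CLOSE_CURSOR(l_cur);
END;
From there you would DEFINE_COLUMN and COLUMN_VALUE each column and build the INSERT with binds - exactly the bookkeeping the INSERT ... SELECT below avoids.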
Have you considered a variation of the following, to bypass the vast majority of the UNDO generation?
CREATE OR REPLACE PROCEDURE MyProcedure (
tabName IN USER_TABLES.table_name%TYPE,
temp_tabName IN USER_TABLES.table_name%TYPE
)
IS
l_log_io NUMBER;
C_BLOCK_SIZE NUMBER := 8192; -- assuming 8192 byte block size
l_undo_bytes NUMBER;
BEGIN
EXECUTE IMMEDIATE 'INSERT /*+ APPEND */ INTO ' || temp_tabName ||
' SELECT * FROM ' || tabName;
select t.log_io, t.used_ublk*C_BLOCK_SIZE undo_bytes
into l_log_io, l_undo_bytes
from v$transaction t
where t.addr = ( SELECT s.taddr FROM v$session s WHERE s.sid = USERENV('SID'));
dbms_output.put_line('Undo bytes used: ' || l_undo_bytes);
END;
INSERT /*+ APPEND */ comes with a number of caveats that you should look into before using it, but it could be a much simpler way of accomplishing your goal.
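One of those caveats is worth spelling out: after a direct-path (APPEND) insert, the same transaction cannot read the target table until it commits (it would raise ORA-12838), so the caller has to commit before querying temp_tabName. A minimal usage sketch, with placeholder table names:
BEGIN
MyProcedure('BIG_SOURCE', 'BIG_SOURCE_STAGE');
COMMIT; -- required before selecting from BIG_SOURCE_STAGE after the APPEND insert
END;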

Can I see the DML inside an Oracle trigger?

Is it possible to see the DML (SQL Statement) that is being run that caused a trigger to be executed?
For example, inside an INSERT trigger I would like to get this:
"insert into myTable (name) values ('Fred')"
I read about ora_sql_txt(sql_text) in a few articles but couldn't get it working - not sure if that is even leading me down the right path?
We are using Oracle 10.
Thank you in advance.
=========================
[EDITED] MORE DETAIL: We have the need to replicate an existing database (DB1) into a classified database (DB2) that is not accessible via the network. I need to keep these databases in sync. This is a one-way sync from (DB1) to (DB2), since (DB2) will contain additional tables and data that is not contained in the (DB1) system.
I have to determine a way to sync these databases without bringing them down (say, for a backup and restore) because it needs to stay live. So I thought that if I can store the actual DML being run (when data changes), I could "play-back" the DML on the new database to update it, just like someone was hand-entering it back in.
I can't bring over all the data because of the sheer size of it, and I can't just copy over the changed records because of FK constraints and the order in which I insert/update records. I figured that if I could "play-back" a log of what happened, using the exact SQL that changed the master, I could keep the databases in sync.
My current plan of attack was to keep a log of all records that were changed, inserted, and deleted, and when I want to sync, the system generates DML to insert/update/delete those records. Then I just take the .SQL file to the classified system and run the script. The problem I'm running into is FKs. (Because when I generate the DML I only know what the current state of the data is, not its path to get there - so ordering of statements is an issue.) I guess I could disable all FKs, do the merge, then re-enable all FKs...
So - does my approach of storing the actual DML as-it-happens suck pondwater, or is there a better solution???
"does my approach of storing the actual DML as-it-happens suck pondwater?" Yes..
Strict ordering of the DML on your DB1 does not really exist. Multiple processes, multiple cores, things essentially happening at the same time.
And the DML, even when it happens sequentially, doesn't act like it. Say the following two update statements run in separate processes with separate transactions, where the update in transaction 2 starts before transaction 1 commits:
update table_a set col_a = 10 where col_b = 'A' -- transaction 1
update table_a set col_c = 'Error' where col_a = 10 -- transaction 2
Since the changes made in the first transaction are not visible to the second transaction, the rows changed by the second transaction will not include those of the first. But if you manage to capture the DML and replay it sequentially, transaction 1's changes will be visible, so transaction 2's changes will be different. (See pages 40 and 41 of Tom Kyte's Expert Oracle Database Architecture, Second Edition.)
Hopefully you are using bind variables, so the DML by itself wouldn't be meaningful: update table_a set col_a = :col_a where id = :id. Now what? OK, so you want the DML with its variable bindings.
Do you use sequences? If so, the next value will not stay in sync between DB1 and DB2. (For example, instance failures can cause lost values; are both systems going to fail at the same time?) And if you are dealing with RAC, where the next value varies depending on the node, forget it.
I would start by investigating Oracle's replication.
I had a situation where I needed to move metadata/configuration changes (stored in a handful of tables) from a development environment to a production environment once tested. Something like GoldenGate is the product to use for this, but it can be costly and complicated to set up and administer.
The following procedure generates a trigger and attaches it to a table that needs the DML saved. The trigger re-creates the DML and, in this case, saves it to an audit table - it's up to you what you do with it. You can use the statements saved to the audit table to replay changes from a given point in time (cut and paste, or develop a procedure to apply them to the target).
Hope you find this useful.
procedure gen_trigger( p_tname in varchar2 )
is
l_theCursor integer default dbms_sql.open_cursor;
l_query varchar2(1000) default 'select * from ' || p_tname;
l_colCnt number := 0;
l_descTbl dbms_sql.desc_tab;
trg varchar(32767) := null;
expr varchar(32767) := null;
cmd varchar(32767) := null;
begin
dbms_sql.parse( l_theCursor, l_query, dbms_sql.native );
dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );
trg := q'#
create or replace trigger <%TABLE_NAME%>_audit
after insert or update or delete on <%TABLE_NAME%> for each row
declare
qs varchar2(20) := q'[q'^]';
qe varchar2(20) := q'[^']';
command clob;
nlsd varchar2(100);
begin
select value into nlsd from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
execute immediate 'alter session set nls_date_format = ''YYYY/MM/DD hh24:mi:ss'' ';
if inserting then
command := <%INSERT_COMMAND%>;
end if;
if updating then
command := <%UPDATE_COMMAND%>;
end if;
if deleting then
command := <%DELETE_COMMAND%>;
end if;
insert into x_audit values (systimestamp, command);
execute immediate q'+alter session set nls_date_format = '+'|| nlsd || q'+'+';
end;
#';
-- Create the insert command
cmd := q'#'insert into <%TABLE_NAME%> (<%INSERT_COLS%>) values ('||<%INSERT_VAL%>||')'#';
-- columns clause
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || ',';
end if;
expr := expr || l_descTbl(i).col_name;
end loop;
cmd := replace(cmd,'<%INSERT_COLS%>',expr);
-- values clause
expr := null;
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || q'#||','||#';
end if;
expr := expr || 'qs||:new.' || l_descTbl(i).col_name || '||qe';
end loop;
cmd := replace(cmd,'<%INSERT_VAL%>',expr);
trg := replace(trg,'<%INSERT_COMMAND%>',cmd);
-- create the update command
-- set clause
expr := null;
cmd := q'#'update <%TABLE_NAME%> set '||<%UPDATE_COLS%>||' where '||<%WHERE_CLAUSE%>#';
for i in 1 .. l_colCnt loop
if expr is not null then
expr := expr || q'#||','||#';
end if;
expr := expr || q'#'#' || l_descTbl(i).col_name || q'# = '||#'|| 'qs||:new.'||l_descTbl(i).col_name || '||qe';
end loop;
null;
cmd := replace(cmd,'<%UPDATE_COLS%>',expr);
trg := replace(trg,'<%UPDATE_COMMAND%>',cmd);
-- create the delete command
expr := null;
cmd := q'#'delete <%TABLE_NAME%> where '||<%WHERE_CLAUSE%>#';
trg := replace(trg,'<%DELETE_COMMAND%>',cmd);
-- where clause using primary key columns (used by update and delete)
expr := null;
for pk in (SELECT column_name FROM all_cons_columns WHERE constraint_name = (
SELECT constraint_name FROM user_constraints
WHERE UPPER(table_name) = UPPER(p_tname) AND CONSTRAINT_TYPE = 'P'
)) loop
if expr is not null then
expr := expr || q'#|| ' and '||#';
end if;
expr := expr || q'#'#' || pk.column_name || q'# = '||#'|| 'qs||:old.'|| pk.column_name || '||qe';
end loop;
if expr is null then -- must have a primary key
raise_application_error(-20000,'The table must have a primary key defined');
end if;
trg := replace(trg,'<%WHERE_CLAUSE%>',expr);
trg := replace(trg,'<%TABLE_NAME%>',p_tname);
execute immediate trg;
null;
exception
when others then
execute immediate 'alter session set nls_date_format=''YYYY/MM/DD'' ';
raise;
end;
/* Example
create table t1 (
col1 varchar2(100),
col2 number,
col3 date,
constraint pk_t1 primary key (col1)
)
/
BEGIN
GEN_TRIGGER('T1');
END;
/
-- Trigger generated ....
create or replace trigger t1_audit after
insert or
update or
delete on t1 for each row
declare
qs varchar2(20) := q'[q'^]';
qe varchar2(20) := q'[^']';
command clob;
nlsd varchar2(100);
begin
select value into nlsd from nls_session_parameters where parameter = 'NLS_DATE_FORMAT';
execute immediate 'alter session set nls_date_format = ''YYYY/MM/DD hh24:mi:ss'' ';
if inserting then
command := 'insert into T1 (COL1,COL2,COL3) values ('||qs||:new.col1||qe||','||qs||:new.col2||qe||','||qs||:new.col3||qe||')';
end if;
if updating then
command := 'update T1 set '||'COL1 = '||qs||:new.col1||qe||','||'COL2 = '||qs||:new.col2||qe||','||'COL3 = '||qs||:new.col3||qe||' where '||'COL1 = '||qs||:old.col1||qe;
end if;
if deleting then
command := 'delete T1 where '||'COL1 = '||qs||:old.col1||qe;
end if;
insert into x_audit values
(systimestamp, command
);
execute immediate q'+alter session set nls_date_format = '+'|| nlsd || q'+'+';
end;
*/
That ora_sql_txt function only works for 'event' (system/DDL) triggers, not for row-level DML triggers like the one above.
You should look into Fine-Grained Auditing (the DBMS_FGA package) as a mechanism for this; details are in the Oracle documentation.
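For illustration, a minimal sketch of such a policy (schema, table and policy names are placeholders); on 10g, FGA can audit DML, and the captured statement text and bind values land in DBA_FGA_AUDIT_TRAIL:
BEGIN
DBMS_FGA.ADD_POLICY(
object_schema   => 'APP_OWNER',
object_name     => 'MYTABLE',
policy_name     => 'MYTABLE_DML_CAPTURE',
statement_types => 'INSERT,UPDATE,DELETE');
END;
/
-- the issued SQL and its bind values are then queryable:
SELECT timestamp, sql_text, sql_bind
FROM dba_fga_audit_trail
WHERE object_name = 'MYTABLE';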
When the trigger code runs, don't you already know the DML that caused it to run?
CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
...
In this case it must have been an insert or an update statement on the emp_tab table.
To find out if it was an update or an insert
if inserting then
...
elsif updating then
...
end if;
The exact column values are available in the :old and :new pseudo-columns.
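For example, a minimal completion of the trigger above (assuming Emp_tab has a sal column):
CREATE OR REPLACE TRIGGER Print_salary_changes
BEFORE INSERT OR UPDATE ON Emp_tab
FOR EACH ROW
BEGIN
IF INSERTING THEN
DBMS_OUTPUT.PUT_LINE('Insert: new salary = ' || :NEW.sal);
ELSIF UPDATING THEN
DBMS_OUTPUT.PUT_LINE('Update: salary ' || :OLD.sal || ' -> ' || :NEW.sal);
END IF;
END;
/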
