Translation of DB2 Trigger to Oracle triggers ORA-04091 Error - oracle

So I have two tables: JOBS and TASKS.
The TASKS Table has a TASK_STATUS Column that stores a progression of states (e.g. 'Submitting', 'Submitted', 'Authored', 'Completed').
The JOBS Table has a JOB_STATUS Column which is a roll-up (i.e. minimum state) of the TASK_STATUS Columns in the TASKS Table. The JOBS Table also has a TASK_COUNT value that contains the number of TASKS associated with a Job.
Jobs can have one or more tasks: JOB_ID in each table links them.
In DB2, I have a series of simple triggers to roll-up this state; here's one in the middle:
create or replace trigger JOB_AUTHORED
after update of TASK_STATUS on TASKS
referencing NEW as N
for each row
when (TASK_STATUS = 'Authored')
update JOBS
set JOB_STATUS = 'Authored'
where JOBS.JOB_ID = N.JOB_ID
and TASK_COUNT=(
select count(0) from TASKS
where TASKS.JOB_ID = N.JOB_ID
and TASKS.TASK_STATUS in ('Authored','Completed'))
This works fine in DB2 because the trigger runs in the same unit of work as the triggering event, so it can see the unit of work's uncommitted changes and can count the TASK_STATUS change that just occurred without hitting a row lock.
Here's the translated trigger in Oracle:
create or replace trigger JOB_AUTHORED
after update of TASK_STATUS on TASKS
for each row
when (NEW.TASK_STATUS = 'Authored')
BEGIN
update JOBS
set JOB_STATUS='Authored'
where JOBS.JOB_ID = :NEW.JOB_ID and TASK_COUNT=(
select count(0) from TASKS
where TASKS.JOB_ID = :NEW.JOB_ID
and TASKS.TASK_STATUS in ('Authored','Completed'));
END;
In Oracle this fails with:
ORA-04091: table MYSCHEMA.TASKS is mutating, trigger/function may not see it
ORA-06512: at "MYSCHEMA.JOB_AUTHORED", line 1
ORA-04088: error during execution of trigger 'MYSCHEMA.JOB_AUTHORED'
[query: UPDATE TASKS SET TASK_STATUS=:1 where TASK_ID=:2]
Apparently Oracle's triggers do not run in the same context, cannot see the uncommitted triggering updates, and thus will never be able to count the number of tasks in specific states when that count includes the triggering row.
I guess I could change the AFTER Trigger to an INSTEAD OF Trigger and update the TASK_STATUS (as well as the JOB_STATUS) within the Trigger (so the Job Update can see the Task Update), but would I run into the same error? Maybe not the first Task Update, but what if the triggering program is updating a bunch of TASKS before committing: what happens when the second Task is updated?
I've also considered removing the Trigger and have a program scan active Jobs for the status of its Tasks, but that seems inelegant.
What is the Best Practice with something like this in Oracle?

The best practice is to avoid triggers if you can.
See these links for an explanation of why one should not use triggers:
http://www.oracle.com/technetwork/testcontent/o58asktom-101055.html
http://rwijk.blogspot.com/2007/09/database-triggers-are-evil.html
Use a procedure (an API) in place of triggers - you can create a package with a few procedures like add_new_job, add_new_task, change_task_status etc., and keep all the logic (checking, changing task states, changing job states, etc.) in one place. Easy to understand, easy to maintain, easy to debug and easy to track errors.
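A minimal sketch of what such a package could look like, using the JOBS/TASKS columns from the question (the package name, procedure signature and the 'Authored' roll-up shown here are illustrative assumptions, not a definitive design):

create or replace package job_api as
  procedure change_task_status(p_task_id in tasks.task_id%type,
                               p_status  in tasks.task_status%type);
end job_api;
/

create or replace package body job_api as
  procedure change_task_status(p_task_id in tasks.task_id%type,
                               p_status  in tasks.task_status%type) is
    v_job_id tasks.job_id%type;
  begin
    -- update the task and remember which job it belongs to
    update tasks
       set task_status = p_status
     where task_id = p_task_id
    returning job_id into v_job_id;

    -- roll the job up in the same transaction: no trigger, no mutating table.
    -- the same pattern as the DB2 trigger, repeated per status as needed.
    if p_status = 'Authored' then
      update jobs
         set job_status = 'Authored'
       where job_id = v_job_id
         and task_count = (select count(*)
                             from tasks
                            where job_id = v_job_id
                              and task_status in ('Authored', 'Completed'));
    end if;
  end change_task_status;
end job_api;
/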
If you insist on using triggers, then you can create a compound trigger, as Tom Kyte mentions in the first link above as a workaround, for example:
create or replace TRIGGER JOB_AUTHORED
FOR UPDATE OF TASK_STATUS on TASKS
COMPOUND TRIGGER
TYPE JOB_ID_TABLE_TYPE IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
JOB_ID_TABLE JOB_ID_TABLE_TYPE;
dummy CHAR;
AFTER EACH ROW IS
BEGIN
-- SELECT null INTO dummy FROM JOBS WHERE job_id = :NEW.JOB_ID FOR UPDATE;
JOB_ID_TABLE( :NEW.JOB_ID ) := :NEW.JOB_ID;
END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN
FORALL x IN INDICES OF JOB_ID_TABLE
UPDATE jobs set JOB_STATUS='Authored'
WHERE JOBS.JOB_ID = JOB_ID_TABLE( x )
and TASK_COUNT=(
select count(0) from TASKS
where TASKS.JOB_ID = JOBS.JOB_ID
and TASKS.TASK_STATUS in ('Authored','Completed')
);
END AFTER STATEMENT;
END JOB_AUTHORED;
But...
I am not sure this example is free of pitfalls that we are not yet aware of.
For example, one pitfall is this scenario:
Suppose there are 18 committed tasks with status 'Authored' and the job's TASK_COUNT is 20.
At time X, user A runs UPDATE TASKS SET TASK_STATUS = 'Authored' WHERE TASK_ID = 2. The trigger fires and sees the 18 committed tasks plus its own change, i.e. 19 tasks with status 'Authored'.
At time X+10ms, user B runs UPDATE TASKS SET TASK_STATUS = 'Authored' WHERE TASK_ID = 4. The trigger fires and likewise sees 18+1 = 19 tasks with status 'Authored'.
At time X+20ms user A commits.
At time X+30ms user B commits.
At the end we have 20 tasks with status 'Authored', but the job's status has not been changed to 'Authored' (it should have been, since TASK_COUNT = 20).
To avoid this trap you can use SELECT null INTO dummy FROM JOBS WHERE job_id = :NEW.JOB_ID FOR UPDATE; in the AFTER EACH ROW part of the trigger, in order to place a lock on the corresponding record in the JOBS table and serialize access (it is commented out in the example above).
But I am still not sure this is a correct solution - it may in turn cause deadlocks in scenarios I cannot imagine or predict at this time.

In short - in Oracle, a row-level trigger cannot select from the table on which it is defined; otherwise you may (or will) get the mutating table error.
You have several options:
1) No triggers at all - in my humble opinion this is the best option, and I would do it this way (but the topic is probably wider and we don't know everything).
Create a view that removes the need for triggers, something like:
create or replace view v_jobs_done as
select * from jobs where not exists (
select 1 from tasks
where TASKS.JOB_ID = jobs.JOB_ID
and TASKS.TASK_STATUS not in ('Authored','Completed')
)
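With a view like that, code that needs to know whether a job is effectively done can simply query it instead of relying on a maintained JOB_STATUS, for example (the bind variable name is illustrative):

select job_id
from v_jobs_done
where job_id = :p_job_id;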
2) Instead of an increasing count, use a decreasing one - so when a countdown on JOBS reaches zero you know everything is done. In this case you have to build/rebuild your other triggers; a rough sketch follows.
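A minimal sketch of that countdown idea, assuming a dedicated countdown column on JOBS (here called TASKS_REMAINING, which is hypothetical - it could equally be TASK_COUNT itself repurposed as a countdown), initialized to the number of tasks when the job is created, and ignoring NULL statuses:

create or replace trigger TASK_AUTHORED_COUNTDOWN
after update of TASK_STATUS on TASKS
for each row
when (NEW.TASK_STATUS = 'Authored' and OLD.TASK_STATUS not in ('Authored','Completed'))
begin
  -- only JOBS is touched here, TASKS is never queried, so ORA-04091 cannot occur;
  -- the row lock on the JOBS row also serializes concurrent sessions working on the same job
  update JOBS
     set TASKS_REMAINING = TASKS_REMAINING - 1,
         JOB_STATUS = case when TASKS_REMAINING - 1 = 0 then 'Authored' else JOB_STATUS end
   where JOB_ID = :NEW.JOB_ID;
end;
/

On the right-hand side of the UPDATE both references to TASKS_REMAINING see the pre-update value, so the CASE flips JOB_STATUS exactly when the counter drops to zero.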
3) A proposition close to yours - you can use a modern compound trigger. I have doubts about performance here, but it works:
create or replace trigger Job_Authored
for update of task_status on tasks compound trigger
type t_ids is table of tasks.job_id%type;
type t_cnts is table of number;
type t_job_counts is table of number index by varchar2(10);
v_ids t_ids;
v_cnts t_cnts;
v_job_counts t_job_counts;
before statement is
begin
select job_id, count(1)
bulk collect into v_ids, v_cnts
from tasks where tasks.task_status in ('Authored','Completed')
group by job_id;
for i in 1..v_ids.count() loop
v_job_counts(v_ids(i)) := v_cnts(i);
end loop;
end before statement;
after each row is
begin
if :new.task_status = 'Authored' then
update jobs set job_status='Authored'
where job_id = :new.job_id
and task_count = v_job_counts(:new.job_id);
end if;
end after each row;
end Job_Authored;

Related

Update table using multi record block in post forms commit

What I wanted is to update my table using values from a multi-record block, and here is what I tried in POST-FORMS-COMMIT:
BEGIN
FIRST_RECORD;
LOOP
UPDATE table1
SET ord_no = :blk.new_val;
EXIT WHEN :SYSTEM.LAST_RECORD='TRUE';
NEXT_RECORD;
END LOOP;
END;
but when I save I get an error
FRM-40737: Illegal restricted procedure
FIRST-RECORD in POST-FORMS-COMMIT trigger.
OK, a few things to talk about here
1) I'm assuming that 'table1' is NOT the table on which the block is based. If the block were based on table1, then as the user edits entries on screen, a COMMIT_FORM command (or the user clicking Save) would perform the appropriate updates for you automatically. (That is the main use of a database-table-based block.)
2) So I'm assuming that 'table1' is something OTHER than the block's base table. The next thing is that you probably need a WHERE clause on your update statement. I assume that you are updating a particular row in table1 based on a particular value from the block? So it would be something like:
update table1
set ord_no = :blk.new_val
where keycol = :blk.some_key_val
3) You cannot perform certain navigation-style operations while in the middle of a commit, hence the error. A workaround for this is to defer the operation until the commit processing has completed, via a timer. So your code is something like:
Declare
l_timer timer;
Begin
l_timer := find_timer('DEFERRED_CHANGES');
if not id_null(l_timer) then
Delete_Timer(l_timer);
end if;
l_timer := Create_Timer('DEFERRED_CHANGES', 100, no_Repeat);
End;
That creates a timer that will fire once, 100ms after your trigger completes (choose the name and time accordingly), and then you put your original code in a WHEN-TIMER-EXPIRED trigger.
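A rough sketch of that WHEN-TIMER-EXPIRED trigger, reusing the block and item names from the question (the key column and the final commit are illustrative assumptions):

-- form-level WHEN-TIMER-EXPIRED trigger
IF Get_Application_Property(TIMER_NAME) = 'DEFERRED_CHANGES' THEN
  GO_BLOCK('BLK');
  FIRST_RECORD;
  LOOP
    UPDATE table1
    SET ord_no = :blk.new_val
    WHERE keycol = :blk.some_key_val;  -- keycol is a placeholder for your real key
    EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
    NEXT_RECORD;
  END LOOP;
  COMMIT_FORM;  -- only if these updates should be saved immediately
END IF;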
But please - check out my point (1) first.

Oracle - Fetch returns more than requested number of rows - using triggers

So I am trying to use triggers to basically set some rules: if anyone has an ID number lower than 3, he will have to pay only 100 dollars, but if someone has an ID above that, he will have to pay more. I did some research and was told to use triggers and that triggers are very useful when fetching multiple rows. So I tried doing that, but it didn't work. Basically the trigger gets created, but when I try to add values, I get the following error:
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "S.PRICTICKET", line 6
ORA-04088: error during execution of trigger 'S.PRICTICKET'
Here is what I did to make the trigger:
CREATE OR REPLACE TRIGGER PRICTICKET BEFORE INSERT OR UPDATE OR DELETE ON PAYS FOR EACH ROW ENABLE
DECLARE
V_PRICE PAYS.PRICE%TYPE;
V_ID PAYS.ID%TYPE;
V_NAME PAYS.NAME%TYPE;
BEGIN
SELECT ID,NAME INTO V_ID,V_NAME FROM PAYS;
IF INSERTING AND V_ID<3 THEN
V_PRICE:=100;
INSERT INTO PAYS(ID,NAME,PRICE) VALUES (V_ID,V_NAME,V_PRICE);
ELSIF INSERTING AND V_ID>=3 THEN
V_PRICE:=130;
INSERT INTO PAYS(ID,NAME,PRICE) VALUES (V_ID,V_NAME,V_PRICE);
END IF;
END;
And the thing is, when I execute this code, I actually do get a message saying the trigger has been compiled. But when I try to insert values into the table using the following code, I get the error message mentioned above.
INSERT INTO PAYS(ID,NAME) VALUES (19,'SS');
You're getting the error you specified, ORA-01422, because you're returning more than one row with the following SELECT:
SELECT ID,NAME INTO V_ID,V_NAME FROM PAYS;
You need to restrict the result set. For example, I'll use the :NEW pseudorecord to grab the row's new ID value, which, if unique, will restrict the SELECT to one row:
SELECT ID,NAME INTO V_ID,V_NAME FROM PAYS WHERE ID = :NEW.ID;
Here is the Oracle documentation on using triggers: https://docs.oracle.com/database/121/TDDDG/tdddg_triggers.htm#TDDDG99934
However, I believe your trigger has other issues, please see my comments and we can discuss.
EDIT: Based on our discussion.
ORA-04088: error during execution of trigger
Using INSERT inside a BEFORE INSERT trigger on the same table will create an infinite loop. Please consider using an AFTER INSERT and change your INSERTS to UPDATES, or an INSTEAD OF INSERT.
Additionally, remove DELETE from the trigger definition. That makes no sense in this context.
Let's begin by clearing up a few things. You were told "triggers are very useful when fetching multiple rows"; as a general rule, and without additional context, this is false. There are 4 types of DML triggers:
Before Statement - fires 1 time for the statement regardless of the number of rows processed.
Before Row - fires once for each row processed during the statement before old and new values are merged into a single set of values. At this point you are allowed to change the values in the columns.
After Row - fires once for each row processed during the statement, after merging old and new values into a single set of values. At this point you cannot change the column values.
After statement - fires once for the statement regardless of the number of rows processed.
Keep in mind that the trigger is effectively part of the statement.
A trigger can be fired for Insert, Update, or Delete. But there is no need to fire on each. In this case, as suggested, remove the Delete - and also the Update, as your trigger is not doing anything with it. (NOTE: there are compound triggers, but they contain segments for each of the above.)
In general a trigger cannot reference the table that it is fired upon. See error ORA-04091.
If you're firing a trigger on an Insert it cannot do an insert into that same table (also see ORA-04091), and even if you got around that, the Insert would fire the trigger again, creating a recursive and perhaps never-ending loop - that is what would happen here.
Use :New.column_name and :Old.column_name as appropriate to refer to column values. Do not attempt to select them.
Since you are attempting to determine the value of a column you must use a Before trigger.
So applying this to your trigger the result becomes:
CREATE OR REPLACE TRIGGER PRICTICKET
BEFORE INSERT ON PAYS
FOR EACH ROW ENABLE
BEGIN
if :new.id is not null then
if :new.ID<3 then
:new.Price :=100;
else
:new.Price := 130;
end if ;
else
null; -- what should happen here?
end if ;
END PRICTICKET ;

Oracle 'after create' trigger to grant privileges

I have an 'after create on database' trigger to provide select access on newly created tables within specific schemas to different Oracle roles.
If I execute a create table ... as select statement and then query the new table in the same block of code within TOAD or a different UI I encounter an error, but it works if I run the commands separately:
create table schema1.table1 as select * from schema2.table2 where rownum < 2;
select count(*) from schema1.table1;
If I execute them as one block of code I get:
ORA-01031: insufficient privileges
If I execute them individually, I don't get an error and am able to obtain the correct count.
Sample snippet of AFTER CREATE trigger
CREATE OR REPLACE TRIGGER TGR_DATABASE_AUDIT AFTER
CREATE OR DROP OR ALTER ON Database
DECLARE
vOS_User VARCHAR2(30);
vTerminal VARCHAR2(30);
vMachine VARCHAR2(30);
vSession_User VARCHAR2(30);
vSession_Id INTEGER;
l_jobno NUMBER;
BEGIN
SELECT sys_context('USERENV', 'SESSIONID'),
sys_context('USERENV', 'OS_USER'),
sys_context('USERENV', 'TERMINAL'),
sys_context('USERENV', 'HOST'),
sys_context('USERENV', 'SESSION_USER')
INTO vSession_Id,
vOS_User,
vTerminal,
vMachine,
vSession_User
FROM Dual;
insert into schema3.event_table VALUES (vSession_Id, SYSDATE,
vSession_User, vOS_User, vMachine, vTerminal, ora_sysevent,
ora_dict_obj_type,ora_dict_obj_owner,ora_dict_obj_name);
IF ora_sysevent = 'CREATE' THEN
IF (ora_dict_obj_owner = 'SCHEMA1') THEN
IF DICTIONARY_OBJ_TYPE = 'TABLE' THEN
dbms_job.submit(l_jobno,'sys.execute_app_ddl(''GRANT SELECT
ON '||ora_dict_obj_owner||'.'||ora_dict_obj_name||' TO
Role1,Role2'');');
END IF;
END IF;
END IF;
END;
Jobs are asynchronous. Your code is not.
Ignoring for the moment the fact that if you're dynamically granting privileges, something is creating new tables live in production without going through a change-control process (at which point a human reviewer would ensure that appropriate grants were included), which implies that you have a much bigger problem...
When you run the CREATE TABLE statement, the trigger fires and a job is scheduled to run. That job runs in a separate session and can't start until your CREATE TABLE statement issues its final implicit commit and returns control to the first session. Best case, that job runs a second or two after the CREATE TABLE statement completes. But it could be longer depending on how many background jobs are allowed to run simultaneously, what other jobs are running, how busy Oracle is, etc.
The simplest approach would be to add a dbms_lock.sleep call between the CREATE TABLE and the SELECT that waits a reasonable amount of time to give the background job time to run. That's trivial to code (and useful to validate that this is, in fact, the only problem you have) but it's not foolproof. Even if you put in a delay that's "long enough" for testing, you might encounter a longer delay in the future. The more complicated approach would be to query dba_jobs, look to see if there is a job there related to the table you just created, and sleep in a loop while there is.
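A rough sketch of that polling approach, run between the CREATE TABLE and the SELECT (it assumes you can read dba_jobs and that the job's WHAT text contains the generated GRANT statement, as in the trigger above):

DECLARE
  l_pending NUMBER;
BEGIN
  LOOP
    SELECT COUNT(*)
      INTO l_pending
      FROM dba_jobs
     WHERE UPPER(what) LIKE '%GRANT SELECT ON SCHEMA1.TABLE1 %';
    EXIT WHEN l_pending = 0;   -- the grant job has run (or was never queued)
    dbms_lock.sleep(1);        -- give the background job another second
  END LOOP;
END;
/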

trigger - select count into - ORA-04091

I want to INSERT or UPDATE on a table (CMS_CONTENT_ENROLLMENT) and then do a count over the same table and update the result into another table. Sadly I'm getting an ORA-04091: table name is mutating, trigger/function may not see it
CREATE OR REPLACE TRIGGER CMS_CONTENT_ENROLLMENT_CNT_TR
AFTER INSERT OR UPDATE
ON CMS_CONTENT_ENROLLMENT
FOR EACH ROW
DECLARE
BEGIN
UPDATE CMS_CONTENT CNT
SET CNT.REGISTRATIONCOUNT =
(
SELECT COUNT (ENROLLMENTID)
FROM CMS_CONTENT_ENROLLMENT
WHERE DELETED = 0 AND CONTENTID = CNT.CONTENTID
)
WHERE CNT.CONTENTID = :NEW.CONTENTID;
END;
/
Words of warning: I don't think it's a good idea to embed complex application logic in triggers, because the code is spread among many objects and it becomes hard to test and maintain. Usually ORA-04091 is an indication that you are trying to perform something too complex for triggers. It can be circumvented, but the result is complex logic that is often flawed in some way, especially in a multi-user environment like a database. For this reason, if you want a summary table I would advise a regular view, a materialized view or procedural logic instead.
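For example, a plain view that computes the count on the fly could look like this (a sketch using the column names from the question; the view name is made up):

CREATE OR REPLACE VIEW V_CMS_CONTENT_REGISTRATIONS AS
SELECT CNT.CONTENTID,
       (SELECT COUNT(E.ENROLLMENTID)
          FROM CMS_CONTENT_ENROLLMENT E
         WHERE E.CONTENTID = CNT.CONTENTID
           AND E.DELETED = 0) AS REGISTRATIONCOUNT
  FROM CMS_CONTENT CNT;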
If you want to go with triggers anyway, you could make it work with incremental logic, like this (untested; it assumes that you never update contentid, that deleted is NOT NULL, that you never actually delete from the table, and that the row already exists in the summary table):
CREATE OR REPLACE TRIGGER CMS_CONTENT_ENROLLMENT_CNT_TR
AFTER INSERT OR UPDATE ON CMS_CONTENT_ENROLLMENT
FOR EACH ROW
DECLARE
BEGIN
IF :NEW.deleted = 0 AND ((inserting) OR :OLD.deleted != 0) THEN
UPDATE CMS_CONTENT CNT
SET CNT.REGISTRATIONCOUNT = CNT.REGISTRATIONCOUNT + 1
WHERE cnt.contentid = :NEW.contentid;
ELSIF :NEW.deleted != 0 AND :OLD.deleted = 0 THEN
UPDATE CMS_CONTENT CNT
SET CNT.REGISTRATIONCOUNT = CNT.REGISTRATIONCOUNT - 1
WHERE cnt.contentid = :NEW.contentid;
END IF;
END;
/
Your FOR EACH ROW trigger is querying the same table that it inserts/updates. Since the trigger fires for each row, the result of the query (if it were allowed) would be indeterminate depending on the order in which the rows are processed - this is illogical, so Oracle raises ORA-04091: table name is mutating, trigger/function may not see it.
In addition, I strongly echo Vincent's warning about writing trigger-based logic in a multi-user (or even multi-session) environment. Oracle will allow multiple versions of your code to run simultaneously, and they will not see each other's changes until they commit. Your trigger will have already "counted" the rows that it could see at the time, which potentially will mean both sessions will get it wrong.
The only way around this is (a) architect your application so that it doesn't need to store the count anywhere, instead query it on the fly when needed; or (b) introduce some form of serialization to stop multiple processes trying to do inserts/updates on the table at the same time.
I strongly recommend option (a). Option (b) is very easy to get wrong unless you first gain a very good understanding of how Oracle works.

Need suggestion on Job run in Oracle 10g

I have a job running in an Oracle 10g production DB which syncs two DB tables, A and B. The job fetches data from table A and inserts it into table B. The job runs on a daily basis, and for the past few months it has been failing in production with the error "Error in retrieving data -54". On checking the stored procedure, I could see that the job fails due to a record-locking issue when other jobs lock records in table A and our job is not able to process them. So I started searching for possible solutions, which I have listed below.
Change the running time of the job so that it can process the records. But this is not going to help, since table A is very critical and always used by production jobs. It also receives real-time updates from users.
Instead of "NO WAIT", use "SKIP LOCKED" so that the job will skip the locked records and run fine. The problem here is that if locked records (always negligible compared to the huge volume of production data) are skipped, there will be a mismatch between the data in tables A and B for that day. The next day's run will clear this up, since the job also picks up unpicked records from previous days, but the slight mismatch on the days the job failed may cause small problems.
Let the job wait until all the records are unlocked and processed. But this again causes a problem, since we cannot predict how long the job will stay in a waiting (long-running) state.
As of now one possible solution for me is to go with option 2 and ignore the slight deviation between table A's and table B's data. Is there any other way in Oracle 10g to run the job without failing or running long, and still process all records? I wish to get some technical guidance on the same.
Thanks
PB
I'd handle the exception (note, you'll have to either initialise your own EXCEPTION or handle OTHERS and inspect the SQLCODE) and track the ids of the rows that were skipped. That way you can retry them once all the available records have been processed.
Something like this:
DECLARE
row_is_locked EXCEPTION;
PRAGMA EXCEPTION_INIT(row_is_locked, -54);
TYPE t_id_type IS TABLE OF INTEGER;  -- nested table, so it can hold any number of skipped ids
l_locked_ids t_id_type := t_id_type();
l_row test_table_a%ROWTYPE;
BEGIN
FOR i IN (
SELECT a.id
FROM test_table_a a
)
LOOP
BEGIN
-- Simulating your processing that requires locks
SELECT *
INTO l_row
FROM test_table_a a
WHERE a.id = i.id
FOR UPDATE NOWAIT;
INSERT INTO test_table_b
VALUES l_row;
-- This is on the basis that you're committing
-- to release the lock on each row after you've
-- processed it; may not be necessary in your case
COMMIT;
EXCEPTION
WHEN row_is_locked THEN
l_locked_ids.EXTEND();
l_locked_ids(l_locked_ids.LAST) := i.id;
END;
END LOOP;
IF l_locked_ids.COUNT > 0 THEN
FOR i IN l_locked_ids.FIRST .. l_locked_ids.LAST LOOP
-- Reconcile the remaining ids here
NULL;
END LOOP;
END IF;
END;
