Query regarding tuning an Oracle trigger

I have a query regarding the performance of the OF, IF and WHEN clauses in an Oracle trigger. Consider the trigger below:
CREATE OR REPLACE TRIGGER WeightChange
AFTER UPDATE ON Person
FOR EACH ROW
BEGIN
IF :new.Weight > 250 AND :new.Weight > :old.Weight THEN
LogWeightChange(:new.PersonId, :new.Weight, :old.Weight);
END IF;
END WeightChange;
Now if I make the following changes,
like adding OF Weight,
or WHEN (new.Weight > 250 AND new.Weight > old.Weight),
return Integer
will any or all of the above significantly improve the trigger?

The WHEN (condition) and OF column trigger features can significantly improve trigger performance. Much of the performance penalty of triggers is the SQL and PL/SQL context switching, and by moving more logic into SQL those switches are avoided.
For example, let's start with a simple schema:
--Sample schema with 100K simple PERSON rows.
create table person
(
id number not null primary key,
name varchar2(100),
weight number
);
insert into person
select level, level, 100 from dual connect by level <= 100000;
commit;
Enable or disable different trigger features:
--Create trigger that fires for all rows.
CREATE OR REPLACE TRIGGER WeightChange
AFTER UPDATE ON Person
FOR EACH ROW
BEGIN
IF :new.Weight > 250 AND :new.Weight > :old.Weight THEN
null;
END IF;
END WeightChange;
/
--Create trigger that only fires for relevant rows.
CREATE OR REPLACE TRIGGER WeightChange
AFTER UPDATE ON Person
FOR EACH ROW
WHEN (new.Weight > 250 AND new.Weight > old.Weight)
BEGIN
null;
END WeightChange;
/
--Create trigger that fires for all updates of WEIGHT.
CREATE OR REPLACE TRIGGER WeightChange
AFTER UPDATE OF weight ON person
FOR EACH ROW
WHEN (new.Weight > 250 AND new.Weight > old.Weight)
BEGIN
null;
END WeightChange;
/
--No trigger.
drop trigger WeightChange;
The WHEN (condition) and OF column features can make UPDATE statements run almost twice as fast in the best case.
--With no trigger - FAST:
--0.749, 0.670, 0.733
update person set weight = 100;
rollback;
--With normal trigger - SLOW:
--1.295, 1.279, 1.264 seconds
update person set weight = 100;
rollback;
--With WHEN condition trigger - FAST:
--0.687, 0.686, 0.687
update person set weight = 100;
rollback;
--With WHEN condition, using a value that satisfies conditions - SLOW:
--1.233, 1.232, 1.233
update person set weight = 500;
rollback;
--With normal trigger, update irrelevant column - SLOW:
--1.263, 1.248, 1.248
update person set name = name;
rollback;
--With OF column trigger, update irrelevant column - FAST:
--0.624, 0.624, 0.609
update person set name = name;
rollback;
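The answer does not show how the timings above were captured; a simple way to reproduce them (assuming SQL*Plus or SQL Developer, which is not stated by the author) is client-side timing:
--Enable elapsed-time reporting, then run each test statement three times:
set timing on
update person set weight = 100;
rollback;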

Related

Update after calculate for each record ORACLE

SELECT CIF_ID,
SUM (IN_VERIFIED_DEBT + IN_FAC_WITH_OTHER + IN_FAC_WITH_BANK)
from LOS_CIF_INDV
WHERE STATUS= 'ACTIVE'
GROUP By CIF_ID;
I want to update the total column again after the user changes a client record (update or insert), but it gives an error:
ORA-04098: trigger 'RLOS138.UPDATE_IN_TOTAL_COMMIT' is invalid and failed re-validation
CREATE OR REPLACE TRIGGER UPDATE_IN_TOTAL_COMMIT
AFTER UPDATE ON
LOS_CIF_INDV
FOR EACH ROW
DECLARE
inactive_id number;
BEGIN
inactive_id:=
:new.IN_VERIFIED_DEBT + :new.IN_FAC_WITH_OTHER + :new.IN_FAC_WITH_BANK;
UPDATE LOS_CIF_INDV
SET IN_TOTAL_COMMIT = inactive_id
WHERE CIF_ID = :NEW.CIF_ID;
END ;
/
I have also tried this:
CREATE OR REPLACE TRIGGER RLOS138.UPDATE_IN_TOTAL_COMMIT
AFTER UPDATE ON RLOS138.LOS_CIF_INDV
FOR EACH ROW
DECLARE
inactive_id number;
BEGIN
SELECT SUM (IN_VERIFIED_DEBT+IN_FAC_WITH_OTHER+IN_FAC_WITH_BANK)
into inactive_id
from LOS_CIF_INDV
WHERE STATUS= 'ACTIVE'
and CIF_ID = :NEW.CIF_ID;
update LOS_CIF_INDV
set IN_TOTAL_COMMIT = inactive_id
where CIF_ID = :NEW.CIF_ID;
END ;
/
Yes, CIF_ID is the primary key.
In which case this trigger has the logic you need:
CREATE OR REPLACE TRIGGER RLOS138.UPDATE_IN_TOTAL_COMMIT
BEFORE UPDATE ON RLOS138.LOS_CIF_INDV
FOR EACH ROW
BEGIN
if :new.status = 'ACTIVE'
then
:new.IN_TOTAL_COMMIT := :new.IN_VERIFIED_DEBT + :new.IN_FAC_WITH_OTHER + :new.IN_FAC_WITH_BANK;
end if;
END ;
/
I have included the check on status because you used it in your aggregation queries, even though you omitted it from the first version of the trigger. I haven't included an ELSE branch, but you may wish to add one. Also, I have assumed that the three columns in the addition are guaranteed to be not null; if that's not the case you'll need to handle it.
I have put a working demo on db<>fiddle. This includes a version of the trigger which fires on inserts as well as updates, and handles null values too:
CREATE OR REPLACE TRIGGER UPDATE_IN_TOTAL_COMMIT
-- handle INSERT as well as UPDATE
BEFORE INSERT OR UPDATE ON LOS_CIF_INDV
FOR EACH ROW
BEGIN
if :new.status = 'ACTIVE'
then
-- handle any of these columns being null
:new.IN_TOTAL_COMMIT := nvl(:new.IN_VERIFIED_DEBT,0)
+ nvl(:new.IN_FAC_WITH_OTHER,0)
+ nvl(:new.IN_FAC_WITH_BANK,0);
end if;
END ;
/
Why not AFTER? Could you explain it to me?
Because Oracle have written triggers that way: the AFTER EACH ROW trigger uses the finalised version of the record, the state which will be written to the database. Consequently, if we want to change any values we need to use a BEFORE EACH ROW trigger. Oracle enforces this with the error you would get otherwise, ORA-04084: cannot change NEW values for this trigger type.
Just a reminder: ORA-04098 is telling you there are compilation errors in your trigger code. If you're not using an IDE which tells you what these errors are you can find them with this query:
select * from all_errors
where owner = 'RLOS138'
and name = 'UPDATE_IN_TOTAL_COMMIT' ;
(Not sure if you're connecting as RLOS138 - if you are, query USER_ERRORS instead.)
If I understood correctly, you want to update all records sharing the updated record's CIF_ID with the same value in the IN_TOTAL_COMMIT column.
This is not a good idea. If you have a derived column, you should use a view instead of updating its value on every insert/update with a trigger.
If you really want to update the column then you must use a combination of a row-level trigger, a statement-level trigger, and package variables (search for "mutating table error" on SO).
But in my opinion, the best solution is to use a view, something like the following:
CREATE OR REPLACE VIEW LOS_CIF_INDV_VW AS
SELECT L.*,
COALESCE(
SUM(
CASE
WHEN STATUS = 'ACTIVE' THEN
IN_VERIFIED_DEBT + IN_FAC_WITH_OTHER + IN_FAC_WITH_BANK
END
) OVER(
PARTITION BY L.CIF_ID
),
0
) AS IN_TOTAL_COMMIT
FROM LOS_CIF_INDV L;

Oracle Trigger Performance is too slow

Given the following trigger:
CREATE OR REPLACE TRIGGER TR_MY_TRG_NAME
AFTER UPDATE OF COL_A, COL_B, COL_C ON T_MY_TABLE_Y
FOR EACH ROW
BEGIN
UPDATE T_MY_TABLE_X X
SET
X.COL_A = :NEW.COL_A,
X.COL_B = :NEW.COL_B,
X.COL_C = :NEW.COL_C
WHERE
X.ID = :NEW.ID;
END;
... and given 2 million existing records in T_MY_TABLE_Y.
Problem:
If my app changes all of the 2 million records (e.g. COL_A), then without the trigger it runs in 2-3 minutes, but with the trigger it takes 40 minutes.
Question:
Are there alternative approaches that I could try?
An alternative is to update T_MY_TABLE_X in a single statement, without forcing a trigger to fire for each of 2 million rows and (probably) perform context switching.
So: as you update T_MY_TABLE_Y, reuse the same UPDATE for T_MY_TABLE_X (with some modifications, if necessary).
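A minimal sketch of that idea, assuming the trigger is dropped or disabled and that both tables share ID values; the bind variable and the full-table scope below are placeholders for whatever the application really does:
--Application's own update of the source table (placeholder value).
UPDATE T_MY_TABLE_Y SET COL_A = :p_new_value;
--One correlated update keeps T_MY_TABLE_X in sync in a single statement.
UPDATE T_MY_TABLE_X x
SET (x.COL_A, x.COL_B, x.COL_C) =
    (SELECT y.COL_A, y.COL_B, y.COL_C
     FROM T_MY_TABLE_Y y
     WHERE y.ID = x.ID)
WHERE EXISTS (SELECT 1 FROM T_MY_TABLE_Y y WHERE y.ID = x.ID);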
I think you might benefit from reducing the DML load by splitting the statement into three parts, so that each column is updated individually:
CREATE OR REPLACE TRIGGER TR_MY_TRG_NAME
AFTER UPDATE OF COL_A, COL_B, COL_C ON T_MY_TABLE_Y
FOR EACH ROW
BEGIN
IF :NEW.COL_A != :OLD.COL_A THEN
UPDATE T_MY_TABLE_X SET COL_A = :NEW.COL_A WHERE ID = :NEW.ID;
END IF;
IF :NEW.COL_B != :OLD.COL_B THEN
UPDATE T_MY_TABLE_X SET COL_B = :NEW.COL_B WHERE ID = :NEW.ID;
END IF;
IF :NEW.COL_C != :OLD.COL_C THEN
UPDATE T_MY_TABLE_X SET COL_C = :NEW.COL_C WHERE ID = :NEW.ID;
END IF;
END;
This way, the update does not have to touch every column each time the trigger fires.
Moreover, be sure T_MY_TABLE_X.ID has an index on it, preferably a unique index.
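For example (the index name below is an assumption; if ID is already the primary key of T_MY_TABLE_X it is indexed automatically and nothing extra is needed):
CREATE UNIQUE INDEX ux_t_my_table_x_id ON T_MY_TABLE_X (ID);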
I don't have the time to write the code out, but an outline approach I can suggest trying is to create a package with an ARRAY of a RECORD of ID, COL_A, COL_B and COL_C.
The BEFORE statement trigger should instantiate and initialise the package and the array within, e.g.:
the_package.pr_init
The ROW LEVEL trigger should simply write the :NEW.ID, :NEW.COL_A, :NEW.COL_B, :NEW.COL_C to the array within the package, e.g:
the_package.pr_save ( :NEW.ID, :NEW.COL_A, :NEW.COL_B, :NEW.COL_C )
The AFTER statement trigger should then issue a bulk UPDATE driving off the array within the package, and clear out the ARRAY, e.g.:
the_package.pr_do_update
The benefit of this approach is that you only execute ONE additional UPDATE statement regardless of how many rows are updated.
The solution is also contained within a single package, albeit spread across 3 triggers, though the trigger code itself will be much simplified.
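A rough sketch of that outline follows. All names here are illustrative and the column types are guessed with %TYPE; on 11g and later a compound trigger can combine the same three timing points into a single object.
CREATE OR REPLACE PACKAGE the_package AS
  PROCEDURE pr_init;
  PROCEDURE pr_save (p_id    T_MY_TABLE_Y.ID%TYPE,
                     p_col_a T_MY_TABLE_Y.COL_A%TYPE,
                     p_col_b T_MY_TABLE_Y.COL_B%TYPE,
                     p_col_c T_MY_TABLE_Y.COL_C%TYPE);
  PROCEDURE pr_do_update;
END the_package;
/
CREATE OR REPLACE PACKAGE BODY the_package AS
  TYPE t_row IS RECORD (id    T_MY_TABLE_Y.ID%TYPE,
                        col_a T_MY_TABLE_Y.COL_A%TYPE,
                        col_b T_MY_TABLE_Y.COL_B%TYPE,
                        col_c T_MY_TABLE_Y.COL_C%TYPE);
  TYPE t_rows IS TABLE OF t_row INDEX BY PLS_INTEGER;
  g_rows t_rows;

  PROCEDURE pr_init IS
  BEGIN
    g_rows.DELETE;   -- clear anything left over from a previous statement
  END pr_init;

  PROCEDURE pr_save (p_id    T_MY_TABLE_Y.ID%TYPE,
                     p_col_a T_MY_TABLE_Y.COL_A%TYPE,
                     p_col_b T_MY_TABLE_Y.COL_B%TYPE,
                     p_col_c T_MY_TABLE_Y.COL_C%TYPE) IS
    l_row t_row;
  BEGIN
    l_row.id    := p_id;
    l_row.col_a := p_col_a;
    l_row.col_b := p_col_b;
    l_row.col_c := p_col_c;
    g_rows(g_rows.COUNT + 1) := l_row;
  END pr_save;

  PROCEDURE pr_do_update IS
    -- Some releases cannot reference record fields inside FORALL (PLS-00436),
    -- so copy into scalar collections before the bulk bind.
    TYPE t_ids IS TABLE OF T_MY_TABLE_Y.ID%TYPE    INDEX BY PLS_INTEGER;
    TYPE t_as  IS TABLE OF T_MY_TABLE_Y.COL_A%TYPE INDEX BY PLS_INTEGER;
    TYPE t_bs  IS TABLE OF T_MY_TABLE_Y.COL_B%TYPE INDEX BY PLS_INTEGER;
    TYPE t_cs  IS TABLE OF T_MY_TABLE_Y.COL_C%TYPE INDEX BY PLS_INTEGER;
    l_ids t_ids;
    l_as  t_as;
    l_bs  t_bs;
    l_cs  t_cs;
  BEGIN
    FOR i IN 1 .. g_rows.COUNT LOOP
      l_ids(i) := g_rows(i).id;
      l_as(i)  := g_rows(i).col_a;
      l_bs(i)  := g_rows(i).col_b;
      l_cs(i)  := g_rows(i).col_c;
    END LOOP;
    -- One bulk-bound UPDATE for the whole statement.
    FORALL i IN 1 .. l_ids.COUNT
      UPDATE T_MY_TABLE_X
      SET COL_A = l_as(i), COL_B = l_bs(i), COL_C = l_cs(i)
      WHERE ID = l_ids(i);
    g_rows.DELETE;
  END pr_do_update;
END the_package;
/
CREATE OR REPLACE TRIGGER trg_y_before_stmt
BEFORE UPDATE OF COL_A, COL_B, COL_C ON T_MY_TABLE_Y
BEGIN
  the_package.pr_init;
END;
/
CREATE OR REPLACE TRIGGER trg_y_each_row
AFTER UPDATE OF COL_A, COL_B, COL_C ON T_MY_TABLE_Y
FOR EACH ROW
BEGIN
  the_package.pr_save(:NEW.ID, :NEW.COL_A, :NEW.COL_B, :NEW.COL_C);
END;
/
CREATE OR REPLACE TRIGGER trg_y_after_stmt
AFTER UPDATE OF COL_A, COL_B, COL_C ON T_MY_TABLE_Y
BEGIN
  the_package.pr_do_update;
END;
/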

Oracle Trigger Conditional Rows

I'm just exploring triggers in Oracle.
I have a table like this.
Note: WT_ID is a FK to Water_summary, but there is no constraint (they are not directly connected).
I want to make a trigger on the Temp_tank table: if there is an update on Temp_tank, it should sum all Temp_tank volumes with the same WT_ID and then write that total to Water_summary.Water_Use. Because of a business requirement, not all Water_summary rows will be updated; in this example only Home A is affected.
This is my code:
CREATE OR REPLACE TRIGGER UPD_WaterUse
AFTER UPDATE ON Temp_tank
DECLARE
temp_wat number;
homeA_id number := 1;
BEGIN
IF (WT_ID = homeA_id) THEN
SELECT SUM(ss.Volume) INTO temp_wat
from Temp_tank ss WHERE ss.Daytime = DAYTIME and ss.WT_ID =homeA_id;
-- functionUpdate(homeA_id,Daytime,temp_wat) ;
ELSE
NULL;
END IF;
END;
/
The question is, in line
IF (WT_ID = homeA_id) THEN
when I compile it, the line is rejected because WT_ID is not an identifier.
Can a trigger not accept this style of code?
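For reference, a statement-level trigger (one without FOR EACH ROW) has no current row, so a bare WT_ID (or :new.WT_ID) is not available inside it. A minimal sketch that keeps the statement-level structure and simply recomputes the Home A total after each update; the Water_summary column names are assumptions based on the description above:
CREATE OR REPLACE TRIGGER UPD_WaterUse
AFTER UPDATE ON Temp_tank
DECLARE
  homeA_id CONSTANT NUMBER := 1;
  temp_wat NUMBER;
BEGIN
  -- Sum every Temp_tank row for Home A; a statement-level trigger may read its own table.
  SELECT SUM(ss.Volume)
  INTO temp_wat
  FROM Temp_tank ss
  WHERE ss.WT_ID = homeA_id;
  -- Assumed target columns on Water_summary; adjust to the real definition.
  UPDATE Water_summary ws
  SET ws.Water_Use = temp_wat
  WHERE ws.WT_ID = homeA_id;
END;
/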

Sequential vs parallel solution

I will try to present my problem as simplified as possible.
Assume that we have 3 tables in Oracle 11g.
Persons (person_id, name, surname, status, etc )
Actions (action_id, person_id, action_value, action_date, calculated_flag)
Calculations (calculation_id, person_id,computed_value,computed_date)
What I want is, for each person that meets certain criteria (let's say status=3),
to get the sum of action_values from the Actions table where calculated_flag=0 (something like: select sum(action_value) from Actions where calculated_flag=0 and person_id=current_id).
Then I use that sum in some kind of formula and update the Calculations table for that specific person_id.
update Calculations set computed_value=newvalue, computed_date=sysdate
where person_id=current_id
After that, calculated_flag for the participating rows will be set to 1.
update Actions set calculated_flag=1
where calculated_flag=0 and person_id=current_id
Now this can be easily done sequentially, by creating a cursor that will run through Persons table and then execute each action needed for the specific person.
(I don't provide the code for the sequential solution as the above is just an example that resembles my real-world setup.)
The problem is that we are talking about quite a big amount of data, and the sequential approach seems like a waste of computational time.
It seems to me that this task could be performed in parallel for a number of person_ids.
So the question is:
Can this kind of task be performed using parallelization in PL/SQL?
What would the solution look like? That is, what special packages (e.g. DBMS_PARALLEL_EXECUTE), keywords (e.g. bulk collect), methods should be used and in what manner?
Also, should I have any concerns about partial failure of parallel updates?
Note that I am not quite familiar with parallel programming with PL/SQL.
Thanks.
Edit 1.
Here is my pseudo code for my sequential solution:
procedure sequential_solution is
cursor persons_of_interest is
select person_id from persons
where status = 3;
tempvalue number;
newvalue number;
begin
for person in persons_of_interest
loop
begin
savepoint personsp;
--step 1
select sum(action_value) into tempvalue
from actions
where calculated_flag = 0
and person_id = person.person_id;
newvalue := dosomemorecalculations(tempvalue);
--step 2
update calculations set computed_value = newvalue, computed_date = sysdate
where person_id = person.person_id;
--step 3
update actions set calculated_flag = 1
where calculated_flag = 0 and person_id = person.person_id;
--step 4 (didn't mention this step before - sorry)
insert into actions
( person_id, action_value, action_date, calculated_flag )
values
( person.person_id, 100, sysdate, 0 );
exception
when others then
rollback to personsp;
-- this call is defined with pragma AUTONOMOUS_TRANSACTION:
log_failure(person.person_id);
end;
end loop;
end;
Now, how would I speed up the above, either with FORALL and BULK COLLECT or with parallel programming, under the following constraints:
proper memory management (taking into consideration the large amount of data)
for a single person, if one part of the step sequence fails, all steps should be rolled back and the failure logged.
I can propose the following. Let's say you have 1 000 000 rows in persons table, and you want to process 10 000 persons per iteration. So you can do it in this way:
declare
id_from persons.person_id%type;
id_to persons.person_id%type;
calc_date date := sysdate;
begin
for i in 1 .. 100 loop
id_from := (i - 1) * 10000;
id_to := i * 10000;
-- Updating Calculations table, errors are logged into err$_calculations table
merge into Calculations c
using (select p.person_id, sum(action_value) newvalue
from Actions a join persons p on p.person_id = a.person_id
where a.calculated_flag = 0
and p.status = 3
and p.person_id between id_from and id_to
group by p.person_id) s
on (s.person_id = c.person_id)
when matched then update
set c.computed_value = s.newvalue,
c.computed_date = calc_date
log errors into err$_calculations reject limit unlimited;
-- updating actions table only for those person_id which had no errors:
merge into actions a
using (select distinct p.person_id
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to) s
on (a.person_id = s.person_id)
when matched then update
set a.calculated_flag = 1;
-- inserting list of persons for who calculations were successful
insert into actions (person_id, action_value, action_date, calculated_flag)
select distinct p.person_id, 100, calc_date, 0
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to;
commit;
end loop;
end;
How it works:
You split the data in the persons table into chunks of about 10,000 rows (this depends on gaps in the ID numbering; the maximum value of i * 10000 should be known to exceed the maximum person_id)
You make a calculation in the MERGE statement and update the Calculations table
The LOG ERRORS clause prevents exceptions from aborting the statement. If an error occurs, the row with the error will not be updated, but it will be inserted into an error-logging table, and execution will not be interrupted. To create this table, execute:
begin
DBMS_ERRLOG.CREATE_ERROR_LOG('CALCULATIONS');
end;
The table err$_calculations will be created. See the documentation for more information about the DBMS_ERRLOG package.
The second MERGE statement sets calculated_flag = 1 only for rows where no errors occurred. The INSERT statement inserts these rows into the actions table. These rows can be found simply by selecting from the Calculations table.
Also, I added the variables id_from and id_to to calculate the ID range to update, and the variable calc_date to make sure that all rows updated in the first MERGE statement can be found later by date.
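Since the question also mentions DBMS_PARALLEL_EXECUTE: the same chunk-by-ID idea can be handed to that package, which creates the chunks and runs them on job slaves in parallel. A rough sketch, assuming the per-chunk logic above is wrapped in a procedure process_chunk(p_from, p_to) (a hypothetical name, not shown here):
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'CALC_TASK');
  -- Chunk PERSONS by its numeric key instead of computing id_from/id_to manually.
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_NUMBER_COL(
    task_name    => 'CALC_TASK',
    table_owner  => USER,
    table_name   => 'PERSONS',
    table_column => 'PERSON_ID',
    chunk_size   => 10000);
  -- :start_id and :end_id are bound by the package for each chunk;
  -- process_chunk (hypothetical) would run the two MERGEs and the INSERT for one range and commit.
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'CALC_TASK',
    sql_stmt       => 'begin process_chunk(:start_id, :end_id); end;',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);
  DBMS_PARALLEL_EXECUTE.DROP_TASK('CALC_TASK');
END;
/
Note that RUN_TASK submits scheduler jobs, so the executing user needs the CREATE JOB privilege, and failed chunks can be inspected in USER_PARALLEL_EXECUTE_CHUNKS and retried.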

PL/SQL Update Trigger Updates All Rows

I'm new to working with PL/SQL and trying to create a statement-level trigger that will change the Reorder value to 'Yes' when the product quantity (p_qoh) is either less than 10 or less than two times the product minimum (p_min), and if that's not the case, change the Reorder value to 'No'. My problem is that when I perform an update for a specific product, it changes the reorder value of all rows instead of the one I'm specifying. I can't seem to figure out where I'm going wrong, I think I've been staring at it too long; any help is greatly appreciated.
CREATE OR REPLACE TRIGGER TRG_AlterProd
AFTER INSERT OR UPDATE OF p_qoh, p_min ON product
DECLARE
v_p_min product.p_min%type;
v_p_qoh product.p_qoh%type;
CURSOR v_cursor IS SELECT p_min, p_qoh FROM product;
BEGIN
OPEN v_cursor;
LOOP
FETCH v_cursor INTO v_p_min, v_p_qoh;
EXIT WHEN v_cursor%NOTFOUND;
IF v_p_qoh < (v_p_min * 2) OR v_p_qoh < 10 THEN
UPDATE product SET p_reorder = 'Yes';
ELSE
UPDATE product SET p_reorder = 'No';
END IF;
END LOOP;
END;
/
The update command :
UPDATE product SET p_reorder = 'Yes';
updates all of your rows because you are not specifying a WHERE clause.
What you can do is retrieve the product's id (product_id) in your cursor and save it so that you can use it this way:
UPDATE product SET p_reorder = 'Yes' WHERE id = product_id;
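A sketch of that fix follows; the key column's real name isn't shown in the question, so P_CODE here is a placeholder:
CREATE OR REPLACE TRIGGER TRG_AlterProd
AFTER INSERT OR UPDATE OF p_qoh, p_min ON product
DECLARE
  -- P_CODE is an assumed key column name; use the table's real key.
  CURSOR v_cursor IS SELECT p_code, p_min, p_qoh FROM product;
BEGIN
  FOR rec IN v_cursor LOOP
    IF rec.p_qoh < (rec.p_min * 2) OR rec.p_qoh < 10 THEN
      UPDATE product SET p_reorder = 'Yes' WHERE p_code = rec.p_code;
    ELSE
      UPDATE product SET p_reorder = 'No' WHERE p_code = rec.p_code;
    END IF;
  END LOOP;
END;
/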
Whoaa, this is not how you do triggers.
1 - Read the Oracle Trigger Documentation
2 - (almost) Never do a commit in a trigger. That is the domain of the calling application.
3 - There is no need to select anything related to product. You already have the product record at hand in the :new and :old pseudo-records. Just update the column value in :new as needed. Example below (not checked for syntax errors, etc.):
CREATE OR REPLACE TRIGGER TRG_AlterProd
BEFORE INSERT OR UPDATE OF p_qoh, p_min ON product
FOR EACH ROW
BEGIN
IF :new.p_qoh < (:new.p_min * 2) OR :new.p_qoh < 10 THEN
:new.p_reorder := 'Yes';
ELSE
:new.p_reorder := 'No';
END IF;
END;
@StevieP, if you need to commit inside a trigger, you may want to consider doing it as an autonomous transaction.
Also, sorry if my understanding of your problem statement is wrong, but it sounded to me like a row-level trigger - are you only updating the current row, or are you scanning the entire table to change the status on several rows? If it's only the current row, @OldProgrammer's solution seems right.
And I am just curious, if you do an UPDATE statement inside the trigger on the same table, wouldn't it generate (recursive) trigger(s)? I haven't done statement triggers like this, so sorry if this is not the expected trigger behavior.
To me a statement trigger would make more sense if the trigger was on, say, a sales table: when a product is sold (inserted into the sales table), it would trigger the corresponding product records in the Product table to be updated (to reorder). That would prevent the recursion danger as well.
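A sketch of that last idea, assuming a hypothetical sales table that is not part of the original question:
-- Fires once per INSERT statement on SALES; a statement trigger has no row
-- context, so it simply recomputes the reorder flag for every product,
-- which avoids any recursion on the PRODUCT trigger (p_reorder is not in its OF list).
CREATE OR REPLACE TRIGGER trg_sales_reorder
AFTER INSERT ON sales
BEGIN
  UPDATE product
  SET p_reorder = CASE
                    WHEN p_qoh < (p_min * 2) OR p_qoh < 10 THEN 'Yes'
                    ELSE 'No'
                  END;
END;
/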
