2 triggers firing at the same time - Oracle

I have two tables
user_salary
----------------------
| user_id | salary_p |
----------------------
| 1       | 100      |
| 2       | 200      |
----------------------

user_p_salary
----------------------
| user_id | salary_c |
----------------------
| 1       | 100      |
| 2       | 200      |
----------------------
user_salary is used via the UI, and it has the following trigger:
create or replace trigger t$user_salary_aiu
after insert or update of salary_p
on user_salary
for each row
begin
update user_p_salary t
set t.salary_c = :new.salary_p
where t.user_id = :new.user_id;
end t$user_salary_aiu;
user_p_salary gets data via integration and it has the following trigger:
create or replace trigger t$user_p_salary_aiu
after insert or update of salary_c
on user_p_salary
for each row
begin
update user_salary t
set t.salary_p = :new.salary_c
where t.user_id = :new.user_id;
end t$user_p_salary_aiu;
Now the problem is that if one of the tables gets data, its trigger executes and updates data in the other table. However, the trigger on the other table executes as well... which creates a cycle of triggers.
The only way I can think of is to use execute immediate 'alter trigger ... disable', but this doesn't seem to work inside triggers at all. Any ideas?
Thanks in advance :-)

The ORA-04091 error is exactly what should happen here. In simple terms, you can't do what you're trying to do. Think about it - you update table #1, then the trigger on table #1 updates table #2, whose trigger updates table #1, whose trigger updates table #2, over and over and over. It's a trigger loop, and Oracle doesn't allow that to happen. The rule is that a row trigger cannot access a table which has already been changed (or "mutated") by the same triggering statement. There are techniques (notably, compound triggers) which let you "work around" this, but the best approach would be to rework the design to eliminate this issue. Sorry to be the bearer of bad news. Best of luck.

Why not only update the salary if it is not already equal to the new value, e.g.:
create or replace trigger t$user_salary_aiu
after insert or update of salary_p
on user_salary
for each row
begin
update user_p_salary t
set t.salary_c = :new.salary_p
where t.user_id = :new.user_id
and ( t.salary_c <> :new.salary_p
or (t.salary_c is null and :new.salary_p is not null)
or (t.salary_c is not null and :new.salary_p is null) );
end t$user_salary_aiu;
create or replace trigger t$user_p_salary_aiu
after insert or update of salary_c
on user_p_salary
for each row
begin
update user_salary t
set t.salary_p = :new.salary_c
where t.user_id = :new.user_id
and ( t.salary_p <> :new.salary_c
or (t.salary_p is null and :new.salary_c is not null)
or (t.salary_p is not null and :new.salary_c is null) );
end t$user_p_salary_aiu;
Note: Despite the wording of the documentation, the dml_event_clause "update of column" appears to mean the trigger will fire if the column is included in the triggering UPDATE statement, i.e. if the column is updated, even if it is updated to the same value it already had.
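For example, with the tables above, even a no-op update fires the trigger for every row touched, because salary_p appears in the SET list:

-- fires t$user_salary_aiu once per row, even though no value actually changes
update user_salary set salary_p = salary_p;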

How about you add a column to each table called BY_TRIGGER?
For an update or insert outside your trigger, you simply do not specify this column. However, when updating or inserting from within your trigger, you pass a value of 1.
Also, in each trigger, you check if :new.BY_TRIGGER is 1, and if it is, you skip the insert/update to the other table.
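A minimal sketch of that idea for one direction (the mirror-image trigger on user_p_salary would look the same); the BY_TRIGGER column and the WHEN clause are illustrative assumptions, not code from the question:

alter table user_salary add (by_trigger number default 0);
alter table user_p_salary add (by_trigger number default 0);

create or replace trigger t$user_salary_aiu
after insert or update of salary_p on user_salary
for each row
when (new.by_trigger is null or new.by_trigger = 0)   -- skip rows written by the other trigger
begin
  update user_p_salary t
     set t.salary_c   = :new.salary_p,
         t.by_trigger = 1                              -- flag this write so t$user_p_salary_aiu ignores it
   where t.user_id = :new.user_id;
end t$user_salary_aiu;

One caveat with this approach: a later UI update that omits BY_TRIGGER keeps the old flag value, so the application (or a reset inside the trigger) has to set BY_TRIGGER back to 0 for its own changes to propagate again.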

Related

View Performance

I have a requirement to perform some calculation on a column of a table with a large data set (300 GB) and return that value.
Basically I need to create a view on that table. The table has 21 years of data and is partitioned on a date column (daily). We cannot put a date condition in the view's query; the user will apply the filter at runtime when querying the view.
For example:
Create view v_view as
select * from table;
Now I want to query the view like
Select * from v_view where ts_date between '1-Jan-19' and '1-Jan-20'
How does Oracle internally execute the above statement? Will it execute the view query first and then apply the date filter to that?
If so, won't there be a performance issue? And how do I resolve this?
Oracle first generates the view and then applies the filter. You can create a function whose input is supplied by the user. The function returns a CREATE VIEW statement, and if you run that statement the view will be created. Just run:
create or replace function fnc_x(where_condition in varchar2)
return varchar2
as
begin
return ' CREATE OR REPLACE VIEW sup_orders AS
SELECT suppliers.supplier_id, orders.quantity, orders.price
FROM suppliers
INNER JOIN orders
ON suppliers.supplier_id = orders.supplier_id
'||where_condition||' ';
end fnc_x;
This function should be run. The input to the function is a string like this:
'WHERE suppliers.supplier_name = ''Microsoft'''
then you should run a block like this to run the function's result:
cl scr
set SERVEROUTPUT ON
declare
szSql varchar2(3000);
crte_vw varchar2(3000);
begin
szSql := 'select fnc_x(''WHERE suppliers.supplier_name = ''''Microsoft'''''') from dual';
dbms_output.put_line(szSql);
execute immediate szSql into crte_vw; -- generate the 'create view' command that depends on the user's where_condition
dbms_output.put_line(crte_vw);
execute immediate crte_vw ; -- create the view
end;
In this manner, you just need to receive the where_condition from the user.
Oracle can "push" the predicates inside simple views and can then use those predicates to enable partition pruning for optimal performance. You almost never need to worry about what Oracle will run first - it will figure out the optimal order for you. Oracle does not need to mindlessly build the first step of a query, and then send all of the results to the second step. The below sample schema and queries demonstrate how only the minimal amount of partitions are used when a view on a partitioned table is queried.
--drop table table1;
--Create a daily-partitioned table.
create table table1(id number, ts_date date)
partition by range(ts_date)
interval (numtodsinterval(1, 'day'))
(
partition p1 values less than (date '2000-01-01')
);
--Insert 1000 values, each in a separate day and partition.
insert into table1
select level, date '2000-01-01' + level
from dual
connect by level <= 1000;
--Create a simple view on the partitioned table.
create or replace view v_view as select * from table1;
The following explain plan shows "Pstart" and "Pstop" set to 3 and 4, which means that only 2 of the many partitions are used for this query.
--Generate an explain plan for a simple query on the view.
explain plan for
select * from v_view where ts_date between date '2000-01-02' and date '2000-01-03';
--Show the explain plan.
select * from table(dbms_xplan.display(format => 'basic +partition'));
Plan hash value: 434062308
-----------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | PARTITION RANGE ITERATOR| | 3 | 4 |
| 2 | TABLE ACCESS FULL | TABLE1 | 3 | 4 |
-----------------------------------------------------------
However, partition pruning and predicate pushing do not always work when we think they should. One thing we can do to help the optimizer is to use date literals instead of strings that look like dates. For example, replace
'1-Jan-19' with date '2019-01-01'. With ANSI date literals there is no ambiguity, and Oracle is more likely to use partition pruning.
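For instance, the query from the question would become:

select *
from v_view
where ts_date between date '2019-01-01' and date '2020-01-01';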

Validation checks in PL/SQL

As per my project requirement, I need to store all validation check queries in one table, validate all records of another table, and update each record with its validation status.
For example, I have two tables called EMP and VALIDATIONS.
The VALIDATIONS table has two columns as below:
------------------+-----------------------------
 Validation_desc  | Validation_sql
------------------+-----------------------------
 EID_IS_NULL      | related SQL should be here
 SAL_HIGH         | related SQL should be here
------------------+-----------------------------
The EMP table has normal columns like eid, ename, sal, dept, is_valid, val_desc.
I should write PL/SQL code which fetches all validation SQLs from the VALIDATIONS table, checks each record of the EMP table and validates it. If the first record passes all validations available in the VALIDATIONS table, then that record's IS_VALID column should be updated with 1 and Validation_desc should be null. If the second record fails 2 checks, then that record's IS_VALID column should be updated with 0 and Validation_desc should be updated with those Validation_desc values, comma separated. Likewise it should check all validations for all records of the EMP table.
I have tried the below code to fetch all details from both tables, but I am not able to write the logic for the validations.
CREATE PROCEDURE P_VALIDATION
as
TYPE REC_TYPE IS RECORD( Validation_desc VARCHAR2(4000),
Validation_sql VARCHAR2(4000));
TYPE VAL_CHECK_TYPE IS TABLE OF REC_TYPE;
LV_VAL_CHECK VAL_CHECK_TYPE;
CURSOR CUR_FEED_DATA IS SELECT * FROM EMP;
LV_FEED_DATA EMP%ROWTYPE;
BEGIN
SELECT Validation_desc, Validation_sql
BULK COLLECT INTO LV_VAL_CHECK FROM VALIDATIONS;
OPEN CUR_FEED_DATA;
LOOP
FETCH CUR_FEED_DATA INTO LV_FEED_DATA;
EXIT WHEN CUR_FEED_DATA%NOTFOUND;
FOR I IN LV_VAL_CHECK.FIRST .. LV_VAL_CHECK.LAST LOOP
----SOME VALIDATIONS LOGIC HERE--
END LOOP;
END LOOP;
CLOSE CUR_FEED_DATA;
END;
There is no single type of validation. Validations can be is null, is not null, is true/false, returns rows/no rows, etc. There are a couple of ways to tackle this:
- write your own assertion package with a procedure for each type of validation. In your table you'd store the type of validation and the expression to be evaluated. This is quite a bit of work (see the sketch after this list).
- leverage an open source testing framework like utPLSQL and modify it a bit to suit your needs. All kinds of validations have already been implemented there. If you decide to go this route, note that there are major differences between version 2 and 3.
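As a rough illustration of the first option (all names here are assumptions for the sketch, not part of the question): store each check as a predicate that must be true for a valid row, and evaluate it per record with dynamic SQL.

-- Assumed layout: VALIDATIONS(validation_desc, validation_sql), where validation_sql holds a
-- predicate that must be TRUE for a valid row, e.g. 'eid is not null' or 'sal <= 50000'.
create or replace function f_failed_checks(p_rid in rowid)   -- hypothetical helper
return varchar2
as
  l_failed varchar2(4000);
  l_cnt    pls_integer;
begin
  for v in (select validation_desc, validation_sql from validations) loop
    -- Counts 1 if this particular row violates the predicate, 0 otherwise.
    execute immediate
      'select count(*) from emp where rowid = :1 and not (' || v.validation_sql || ')'
      into l_cnt using p_rid;
    if l_cnt > 0 then
      l_failed := l_failed || case when l_failed is not null then ',' end || v.validation_desc;
    end if;
  end loop;
  return l_failed;   -- NULL means every check passed
end f_failed_checks;
/

The cursor loop in P_VALIDATION could then call this for each row and update is_valid and val_desc from the result.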
First off, pay attention to #KoenLostrie's opening sentence: There is no single type of validation. This basically says no single process will solve the problem. But using the database's built-in validations should be your first line of attack. Both of the example validations simply cease to be necessary with simple predefined constraints:
Use a constraint to define ... — a rule that
restricts the values in a database. Oracle Database lets you create
six types of constraints ...
For the table you outlined, constraints can handle the validations you described and a couple of extras:
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Validation  | Column | Constraint Type                  | Description                                         |
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Eid_is_Null | eid    | Primary Key                      | Guarantees eid is not null and is unique in table   |
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Sal_High    | salary | Check (salary <= 50000)          | Guarantees salary column is not greater than 50,000 |
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Dept_OK     | dept   | not null                         | Ensures column is present                           |
+             +        +----------------------------------+-----------------------------------------------------+
|             |        | Foreign key to Departments table | Guarantees value exists in reference table          |
+-------------+--------+----------------------------------+-----------------------------------------------------+
| Name_Ok     | ename  | not null                         | Ensures column is present                           |
+-------------+--------+----------------------------------+-----------------------------------------------------+
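For illustration, the constraints from that table could be declared roughly as follows (using the question's column names; the DEPARTMENTS table and its dept_id column are assumed for the foreign-key example):

alter table emp add constraint emp_pk primary key (eid);
alter table emp add constraint emp_sal_high_ck check (sal <= 50000);
alter table emp modify (dept not null);
alter table emp add constraint emp_dept_fk foreign key (dept)
  references departments (dept_id);   -- assumed lookup table
alter table emp modify (ename not null);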
For the constraints listed above, and any other constraint added, you do not need a query to validate - the database manager simply will not allow an invalid row to exist. Of course your code now needs to handle the resulting exceptions.
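A minimal sketch of what that exception handling might look like (the inserted values and messages are made up):

declare
  e_check_violated exception;
  pragma exception_init(e_check_violated, -2290);   -- ORA-02290: check constraint violated
  e_cannot_be_null exception;
  pragma exception_init(e_cannot_be_null, -1400);   -- ORA-01400: cannot insert NULL
begin
  insert into emp (eid, ename, sal, dept) values (101, 'JONES', 99999, 10);
exception
  when dup_val_on_index then
    dbms_output.put_line('eid already exists (primary key violated)');
  when e_check_violated then
    dbms_output.put_line('a check constraint failed, e.g. SAL_HIGH');
  when e_cannot_be_null then
    dbms_output.put_line('a mandatory column was left null');
end;
/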
Unfortunately constraints cannot handle every validation, so you will still need validation routines to record those. However, you should not create a comma-separated list (not only do such lists violate first normal form, they are always more trouble than they are worth - how do you update the list when the error is resolved?). Instead create a new table, emp_violations:
create table emp_violations (
emp_vio_id integer generated always as identity
, emp_id integer -- or type of emp_id
, vio_id integer -- or type of vio_id (the pk of violations table)
, date_created date
, date_resolved date
, constraint emp_violations_pk
primary key (emp_vio_id)
, constraint emp_violations_2_emp_fk
foreign key (emp_id)
references emp(emp_id)
, constraint emp_violations_2_violations_fk
foreign key (vio_id)
references violations(vio_id)
);
With this it is easy to see which non-constraint violations have existed or currently exist, and when they were resolved. Also remove the columns Validation_desc (no longer needed) and is_valid (derivable from unresolved emp_violations rows, requiring no additional column maintenance).
If you absolutely must have is_valid and a comma-separated list of violations, then create a view with the LISTAGG function.
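A rough sketch of such a view, built on the emp_violations table above (assuming emp's key column is eid and that the violations table carries the description column):

create or replace view emp_validation_status as
select e.eid,
       case when count(ev.emp_vio_id) = 0 then 1 else 0 end as is_valid,
       listagg(v.validation_desc, ',') within group (order by v.validation_desc) as val_desc
from emp e
left join emp_violations ev on ev.emp_id = e.eid
                           and ev.date_resolved is null   -- only open violations count
left join violations v on v.vio_id = ev.vio_id
group by e.eid;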

Best practice for updating column specific triggers

Welcome, Oracle pros.
In an Oracle 12 database (an upgrade is already scheduled ;-)) we have a setup of different tables updating a common base table via "after update" triggers, like the following:
Search_Flat
ID
Field_A
Field_B
Field_C
Now table1 contains n columns where, let's say, 2 out of n are relevant for the Search_Flat table. As an update of table1 may only affect columns not relevant for Search_Flat, we want to add checks to the trigger. So our first approach is like the following:
CREATE OR REPLACE TRIGGER tr_tbl_1_au_search
AFTER UPDATE OF
field_a,
field_b
ON schemauser.table1
FOR EACH ROW
BEGIN
IF :new.field_a <> :old.field_a THEN
UPDATE schemauser.search_flat SET field_a = :new.field_a WHERE id = :new.ID;
END IF;
IF :new.field_b <> :old.field_b THEN
UPDATE schemauser.search_flat SET field_b = :new.field_b WHERE id = :new.ID;
END IF;
END;
Alternatively, we could also set up the trigger like the following:
CREATE OR REPLACE TRIGGER tr_tbl_1_au_search
AFTER UPDATE OF
field_a,
field_b
ON schemauser.table1
FOR EACH ROW
BEGIN
IF :new.field_a <> :old.field_a OR :new.field_b <> :old.field_b THEN
UPDATE schemauser.search_flat
SET field_a = :new.field_a,
field_b = :new.field_b
WHERE id = :new.ID;
END IF;
END;
The question now is about the setup of the triggers themselves. Which approach is better with respect to:
locking time of search_flat rows
overall performance of affected components (i.e., table_1, trigger and search_flat)
In production we are talking about 4 tables with 10 fields each considered in the triggers. And we have independent app servers accessing the shared database, updating the 4 tables simultaneously. From time to time we detect the following error, which is the reason we want to optimize the triggers:
ORA-02049: timeout: distributed transaction waiting for lock
Sidenote: This setup has been chosen instead of a view or materialized view for performance reasons, as the base table is used in a GUI with the requirement to be instantly updated, and the number of records in the 4 feeding tables is too high for updating a materialized view on every change.
I'm looking forward to the discussion and your thoughts.
As I understand your post, you have 4 live tables (called "table1", "table2", etc.) that you want to search on, but querying from them is too slow, so you want to maintain a single, flattened table to search on instead and have triggers to keep that flattened table always up-to-date.
You want to know which of two trigger approaches is better.
I think the answer is "neither", since both are prone to deadlocks. Imagine this scenario
User 1 -
UPDATE table1
SET field_a = 500
WHERE <condition affecting 200 distinct IDs>
User 2 at about the same time -
UPDATE table1
SET field_b = 700
WHERE <condition affecting 200 distinct IDs>
Triggers start processing. You cannot control the order in which the rows are updated. Maybe it goes like this:
User 1's trigger, time index 100 ->
UPDATE search_flat SET field_a = 500 WHERE id = 90;
User 2's trigger, time index 101 ->
UPDATE search_flat SET field_b = 700 WHERE id = 91;
User 1's trigger, time index 102 ->
UPDATE search_flat SET field_a = 500 WHERE id = 91; (waits on user 2's session)
User 2's trigger, time index 103 ->
UPDATE search_flat SET field_b = 700 WHERE id = 90; (deadlock error)
User 2's original update fails and rolls back.
You have multiple concurrent processes all updating the same set of rows in search_flat with no control over the processing order. That is a recipe for deadlocks.
If you wanted to do this safely, you should use neither of the FOR EACH ROW trigger approaches you outlined. Rather, make a compound trigger to do this.
Here's some sample code to illustrate the idea. Be sure to read the comments.
-- Aside: consider setting this at the system level if on 12.2 or later
-- alter system set temp_undo_enabled=false;
CREATE GLOBAL TEMPORARY TABLE table1_updates_gtt (
id NUMBER,
field_a VARCHAR2(80),
field_b VARCHAR2(80)
) ON COMMIT DELETE ROWS;
CREATE GLOBAL TEMPORARY TABLE table2_updates_gtt (
id NUMBER,
field_a VARCHAR2(80)
) ON COMMIT DELETE ROWS;
-- .. so on for table3 and 4.
CREATE OR REPLACE TRIGGER table1_search_maint_trg
FOR INSERT OR UPDATE OR DELETE ON table1 -- with similar compound triggers for table2, 3, 4.
COMPOUND TRIGGER
AFTER EACH ROW IS
BEGIN
-- Update the table-1 specific GTT with the changes.
CASE WHEN INSERTING OR UPDATING THEN
-- Assumes ID is immutable primary key
INSERT INTO table1_updates_gtt (id, field_a, field_b) VALUES (:new.id, :new.field_a, :new.field_b);
WHEN DELETING THEN
INSERT INTO table1_updates_gtt (id, field_a, field_b) VALUES (:old.id, null, null); -- or figure out what you want to do about deletes.
END CASE;
END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN
-- Write the data from the GTT to the search_flat table.
-- NOTE: The ORDER BY in the next line is what saves us from deadlocks.
FOR r IN ( SELECT id, field_a, field_b FROM table1_updates_gtt ORDER BY id ) LOOP
-- TODO: replace with BULK processing for better performance, if DMLs can affect a lot of rows
UPDATE search_flat sf
SET sf.field_a = r.field_a,
sf.field_b = r.field_b
WHERE sf.id = r.id
AND ( sf.field_a <> r.field_a
OR (sf.field_a IS NULL AND r.field_a IS NOT NULL)
OR (sf.field_a IS NOT NULL AND r.field_a IS NULL)
OR sf.field_b <> r.field_b
OR (sf.field_b IS NULL AND r.field_b IS NOT NULL)
OR (sf.field_b IS NOT NULL AND r.field_b IS NULL)
);
END LOOP;
END AFTER STATEMENT;
END table1_search_maint_trg;
Also, as numerous commenters have pointed out, it's probably better to use a materialized view for this. If you are on 12.2 or later, real-time materialized views (aka "ENABLE ON QUERY COMPUTATION") offer a lot of promise for this sort of thing. No COMMIT overhead to your application and real-time search results. It's just that search time degrades slightly if there are a lot of recent updates to the underlying tables.
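For reference, a minimal sketch of that option for a single source table (names are assumptions; being fast refreshable is a prerequisite, so each base table needs a materialized view log):

-- 12.2+ only. The log captures row changes so deltas can be applied at query time.
create materialized view log on table1
  with primary key, rowid (field_a, field_b)
  including new values;

create materialized view search_flat_mv
  refresh fast on demand
  enable on query computation   -- queries merge the MV with not-yet-refreshed log entries
as
select id, field_a, field_b
from table1;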

Inserting Row Number based on existing value in the table

I have a requirement to insert a row number in a table based on the value already present in the table. For example, the max row_nbr record in the current table is something like this:
+----------+----------+------------+---------+
| FST_NAME | LST_NAME | STATE_CODE | ROW_NBR |
+----------+----------+------------+---------+
| John     | Doe      | 13         | 123     |
+----------+----------+------------+---------+
Now, I need to insert more records with given FST_NAME and LST_NAME values. ROW_NBR needs to be generated while inserting the data into the table, with values auto-incrementing from 123.
I can't use a sequence, as my loading process is not the only process that inserts data into this table. And I can't use a cursor either, as due to the high volume of data the TEMP space gets filled up quickly. I'm inserting data as given below:
insert into final_table
( fst_name,lst_name,state_code)
(select * from staging_table
where state_code=13);
Any ideas how to implement this?
It sounds like other processes are finding the current maximum row_nbr value and incrementing it as they do single-row inserts in a cursor loop.
You could do something functionally similar, either finding the maximum in advance and incrementing it (if you're already running this in a PL/SQL block):
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, variable_holding_maximum + rownum
from staging_table st
where st.state_code=13;
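For completeness, the surrounding PL/SQL block for that first option might look something like this (the variable name is kept from the snippet above; the staging columns are listed explicitly):

declare
  variable_holding_maximum final_table.row_nbr%type;
begin
  select nvl(max(row_nbr), 0)
    into variable_holding_maximum
    from final_table;

  insert into final_table (fst_name, lst_name, state_code, row_nbr)
  select st.fst_name, st.lst_name, st.state_code, variable_holding_maximum + rownum
  from staging_table st
  where st.state_code = 13;
end;
/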
or by querying the table as part of the query, which doesn't need PL/SQL:
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, (select max(row_nbr) from final_table) + rownum
from staging_table st
where st.state_code=13;
db<>fiddle
But this isn't a good solution because it doesn't prevent clashes from different processes and sessions trying to insert at the same time; but neither would the cursor loop approach, unless it is catching unique constraint errors and re-attempting with a new value, perhaps.
It would be better to use a sequence - ideally as an auto-increment (identity) column, but you said you can't change the table structure; and you need to let the other processes continue to work without modification. You can still do that with a sequence-and-trigger approach, having the trigger always set the row_nbr value from the sequence, regardless of whether the insert statement supplied a value.
If you create a sequence that starts from the current maximum, with something like:
create sequence final_seq start with <current max + 1>
or without manually finding it:
declare
start_with pls_integer;
begin
select nvl(max(row_nbr), 0) + 1 into start_with from final_table;
execute immediate 'create sequence final_seq start with ' || start_with;
end;
/
then your trigger could just be:
create trigger final_trig
before insert on final_table
for each row
begin
:new.row_nbr := final_seq.nextval;
end;
/
Then your insert ... select statement doesn't need to supply or even think about the row_nbr value, so you can leave it as you have it now (except I'd avoid select * even in that construct, and list the staging table columns explicitly); and any existing inserts that do supply the row_nbr don't need to be modified, as the value they supply will just be overwritten from the sequence.
db<>fiddle showing inserts with and without row_nbr specified.
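For illustration, the insert from the question can then stay essentially as it is, just with the columns spelled out and no row_nbr supplied (the trigger fills it in):

insert into final_table (fst_name, lst_name, state_code)
select st.fst_name, st.lst_name, st.state_code
from staging_table st
where st.state_code = 13;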

Trying to delete a row based upon condition defined in my trigger (SQL)

I am trying to create a row level trigger to delete a row if a value in the row is being made NULL. My business parameters state that if a value is being made null, then the row must be deleted. Also, I cannot use a global variable.
BEGIN
IF :NEW.EXHIBIT_ID IS NULL THEN
DELETE SHOWING
WHERE EXHIBIT_ID = :OLD.EXHIBIT_ID;
END IF;
I get the following errors:
ORA-04091: table ISA722.SHOWING is mutating, trigger/function may not see it
ORA-06512: at "ISA722.TRG_EXPAINT", line 7
ORA-04088: error during execution of trigger 'ISA722.TRG_EXPAINT'
When executing this query:
UPDATE SHOWING
SET EXHIBIT_ID = NULL
WHERE PAINT_ID = 5104
As already indicated, this is a terrible idea/design. Triggers are very poor methods for enforcing business rules. These should be enforced in the application or, better (IMO), by a stored procedure called by the application. In this case not only is it a bad idea, but it cannot be implemented as desired. Within a trigger, Oracle does not permit accessing the table the trigger was fired on. That is what "mutating" indicates. Think of trying to debug this or resolve a problem a week later. Nevertheless this nonsense can be accomplished by creating a view and processing against it instead of the table.
-- setup
create table showing (exhibit_id integer, exhibit_name varchar2(50));
create view show as select * from showing;
-- trigger on VIEW
create or replace trigger show_iiur
instead of insert or update on show
for each row
begin
merge into showing
using (select :new.exhibit_id new_eid
, :old.exhibit_id old_eid
, :new.exhibit_name new_ename
from dual
) on (exhibit_id = old_eid)
when matched then
update set exhibit_name = new_ename
delete where new_eid is null
when not matched then
insert (exhibit_id, exhibit_name)
values (:new.exhibit_id, :new.exhibit_name);
end ;
-- test data
insert into show(exhibit_id, exhibit_name)
select 1,'abc' from dual union all
select 2,'def' from dual union all
select 3,'ghi' from dual;
-- 3 rows inserted
select * from show;
--- test
update show
set exhibit_name = 'XyZ'
where exhibit_id = 3;
-- 1 row updated
-- Now for the requested action. Turn the UPDATE into a DELETE
update show
set exhibit_id = null
where exhibit_name = 'def';
-- 1 row updated
select * from show;
-- table and view are the same (expect 0 rows)
select * from show MINUS select * from showing
UNION ALL
select * from showing MINUS select * from show;
Again, this is a bad option, yet you can do it. But just because you can doesn't mean you should, or that you'll be happy with the result. Good luck.
You have written a trigger that fires before or after a row change, i.e. in the middle of the statement's execution. You cannot delete a row from the same table at that moment.
So you must write an after-statement trigger instead, which only fires once the whole statement has run.
create or replace trigger mytrigger
after update of exhibit_id on showing
begin
delete from showing where exhibit_id is null;
end mytrigger;
Demo: https://dbfiddle.uk/?rdbms=oracle_18&fiddle=dd5ade700d49daf14f4cdc71aed48e17
What you can do is create an extra column like is_to_be_deleted in the same table, and do this:
UPDATE SHOWING
SET EXHIBIT_ID = NULL, is_to_be_deleted = 'Y'
WHERE PAINT_ID = 5104;
You can use this column to implement your business logic of not showing the null details.
And later you can schedule a batch delete on that table to clean up these rows (or maybe archive them); see the sketch below.
Benefit: you can avoid an extra unnecessary trigger on that table.
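A minimal sketch of such a cleanup job (the job name, schedule and exact delete are assumptions for illustration):

begin
  dbms_scheduler.create_job(
    job_name        => 'purge_deleted_showings',   -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin delete from showing where is_to_be_deleted = ''Y''; commit; end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',      -- e.g. run nightly at 02:00
    enabled         => true);
end;
/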
Nobody will suggest using a trigger to do this type of delete, as it is expensive.
