Delaying the trigger invocation after an insert - Oracle

Is there a way to do this? I found (by googling) that adding
DBMS_LOCK.sleep()
to the beginning of the trigger code introduces a delay, but it blocks the insert itself from happening. I would like the insert to go through immediately and the trigger's work to happen only after an arbitrary delay. Thanks.

It would help if we knew why you want this delay, and what the trigger is supposed to do after the delay. However, one possibility is to use the DBMS_JOB package in the trigger to create a job that runs at a time a little after the insert. For example:
create trigger trg
after insert on tab
for each row
declare
  l_job number;
begin
  -- note: :new.id must be concatenated into the string, not embedded as text;
  -- the job will not actually run until the inserting transaction commits
  dbms_job.submit
  ( job       => l_job
  , what      => 'myproc(' || :new.id || ');'
  , next_date => sysdate + 1/24/60 -- one minute later
  );
end;
Alternatively, the trigger could insert a row into a special table, and a DBMS_JOB that runs on a schedule e.g. every 10 minutes could process rows in the table that are more than X minutes old.
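A minimal sketch of that queue-table approach, reusing the tab table and myproc procedure from the example above (tab_queue and process_queue are made-up names for illustration):
create table tab_queue
( id        number
, queued_at date default sysdate
);

create or replace trigger trg_queue
after insert on tab
for each row
begin
  -- just record the key; the real work happens later
  insert into tab_queue (id) values (:new.id);
end;
/

-- procedure run by a scheduled job, e.g. every 10 minutes,
-- processing rows that are at least 5 minutes old
create or replace procedure process_queue
as
begin
  for r in ( select id from tab_queue
             where queued_at < sysdate - 5/24/60 )
  loop
    myproc(r.id);
    delete from tab_queue where id = r.id;
  end loop;
end;
/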

Related

Oracle insert procedure inserts only 1 row instead of bulk

I am attempting to grab a variable max date from a table, then use that variable to insert into another table the records that are newer than that max date. I have created the procedure and tested it, but it only inserts 1 record each time it runs as a scheduled job (via dbms_scheduler, every 30 minutes). My test case allowed the first run to insert 6 rows, but after that first run it only inserted 1 of the 6 records, then the next run inserted 1 record, and so on. The eventual goal is to append a few thousand rows every 30 minutes as a scheduled job. What is the most effective way to run this type of procedure quickly and bulk insert the rows? I was considering altering the table to NOLOGGING and dropping any indexes, then rebuilding them after the insert. What is the best approach? Thank you in advance.
Here is my code:
create or replace procedure update_cars
AS
v_date date;
begin
execute immediate 'alter session set NLS_DATE_FORMAT=''DD-MON-YY HH24:MI:SS''';
select max(inventory_date) into v_date from car_equipment;
insert /*+APPEND*/ into car_equipment(count_cars,equipment_type,location,inventory_date,count_inventory)
select count_cars,equipment_type,location,inventory_date,count_inventory
from car_source where inventory_date > v_date;
end;
Why are you altering session? What benefit do you expect from it?
The code you wrote can be "simplified" to:
create or replace procedure update_cars
as
begin
insert into car_equipment (count_cars, equipment_type, ...)
select s.count_cars, s.equipment_type, ...
from car_source s
where inventory_date > (select max(e.inventory_date) from car_equipment e);
end;
If the code inserts only one row, then check the date values in both the car_equipment and car_source tables. Without sample data, I'd say that the code itself is OK (at least, it looks OK to me).
If you'll be inserting a few thousand rows every 30 minutes, that shouldn't be a problem; Oracle handles that easily.
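For that check, a quick diagnostic query along these lines (using the table names from the question) shows how the two maximum dates compare and how many source rows would actually qualify:
select (select max(inventory_date) from car_equipment) as max_equipment_date,
       (select max(inventory_date) from car_source)    as max_source_date,
       (select count(*)
          from car_source
         where inventory_date > (select max(inventory_date) from car_equipment)
       ) as rows_that_would_insert
  from dual;
If rows_that_would_insert is 1, the procedure is doing exactly what it was asked to do; note, for example, that rows sharing the current maximum inventory_date are excluded by the strict > comparison.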

How to create a trigger that won't hold the entire transaction until it completes?

There is a procedure that performs an insert into a table partway through. The table has a trigger, and because of that the entire transaction is held up. Is there any way to make the trigger run in a separate session, so that after the insert the procedure continues without waiting for the trigger to complete?
Both the procedure and the trigger are
PRAGMA AUTONOMOUS_TRANSACTION
You could try running the trigger's work as a dbms_job, as follows:
CREATE OR REPLACE TRIGGER myrigger
AFTER INSERT
ON mytable
REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
DECLARE
l_job number;
begin
dbms_job.submit( l_job, 'MYPACKAGE.MYFUNCTION(''' || :new.myField || ''');' );
END ;
/
If the trigger is based on an insert operation, it will hold the current session until the trigger action has completed. This is required to maintain integrity on the database side. It may be possible to design the procedure and trigger in a better way if the requirements are known.

Oracle 'after create' trigger to grant privileges

I have an 'after create on database' trigger to provide select access on newly created tables within specific schemas to different Oracle roles.
If I execute a create table ... as select statement and then query the new table in the same block of code (in TOAD or a different UI), I encounter an error, but it works if I run the commands separately:
create table schema1.table1 as select * from schema2.table2 where rownum < 2;
select count(*) from schema1.table1;
If I execute them as one block of code I get:
ORA-01031: insufficient privileges
If I execute them individually, I don't get an error and am able to obtain the correct count.
Sample snippet of AFTER CREATE trigger
CREATE OR REPLACE TRIGGER TGR_DATABASE_AUDIT AFTER
CREATE OR DROP OR ALTER ON Database
DECLARE
vOS_User VARCHAR2(30);
vTerminal VARCHAR2(30);
vMachine VARCHAR2(30);
vSession_User VARCHAR2(30);
vSession_Id INTEGER;
l_jobno NUMBER;
BEGIN
SELECT sys_context('USERENV', 'SESSIONID'),
sys_context('USERENV', 'OS_USER'),
sys_context('USERENV', 'TERMINAL'),
sys_context('USERENV', 'HOST'),
sys_context('USERENV', 'SESSION_USER')
INTO vSession_Id,
vOS_User,
vTerminal,
vMachine,
vSession_User
FROM Dual;
insert into schema3.event_table VALUES (vSession_Id, SYSDATE,
vSession_User, vOS_User, vMachine, vTerminal, ora_sysevent,
ora_dict_obj_type,ora_dict_obj_owner,ora_dict_obj_name);
IF ora_sysevent = 'CREATE' THEN
  IF ora_dict_obj_owner = 'SCHEMA1' THEN
    IF ora_dict_obj_type = 'TABLE' THEN
      dbms_job.submit(l_jobno,
        'sys.execute_app_ddl(''GRANT SELECT ON ' || ora_dict_obj_owner || '.'
        || ora_dict_obj_name || ' TO Role1,Role2'');');
    END IF;
  END IF;
END IF;
END;
Jobs are asynchronous. Your code is not.
Ignoring for the moment the fact that if you're dynamically granting privileges, something is creating new tables live in production without going through a change control process (at which point a human reviewer would ensure that the appropriate grants were included), which implies that you have a much bigger problem...
When you run the CREATE TABLE statement, the trigger fires and a job is scheduled to run. That job runs in a separate session and can't start until your CREATE TABLE statement issues its final implicit commit and returns control to the first session. Best case, that job runs a second or two after the CREATE TABLE statement completes. But it could be longer depending on how many background jobs are allowed to run simultaneously, what other jobs are running, how busy Oracle is, etc.
The simplest approach would be to add a dbms_lock.sleep call between the CREATE TABLE and the SELECT that waits a reasonable amount of time, giving the background job time to run. That's trivial to code (and useful to confirm that this is, in fact, the only problem you have) but it's not foolproof: even if you put in a delay that's "long enough" for testing, you might encounter a longer delay in the future. The more complicated approach would be to query dba_jobs, check whether a job related to the table you just created is still pending, and sleep in a loop while it is.
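A sketch of that more complicated approach, assuming the session can read dba_jobs (or user_jobs) and has EXECUTE on dbms_lock; the LIKE pattern is only illustrative:
create table schema1.table1 as select * from schema2.table2 where rownum < 2;

declare
  l_pending number;
begin
  loop
    -- is the grant job submitted by the DDL trigger still queued?
    select count(*)
      into l_pending
      from dba_jobs
     where upper(what) like '%GRANT SELECT ON SCHEMA1.TABLE1%';
    exit when l_pending = 0;
    dbms_lock.sleep(1);  -- wait a second, then check again
  end loop;
end;
/

select count(*) from schema1.table1;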

Translation of DB2 Trigger to Oracle triggers ORA-04091 Error

So I have two tables: JOBS and TASKS.
The TASKS Table has a TASK_STATUS Column that stores a progression of states (e.g. 'Submitting', 'Submitted', 'Authored', 'Completed').
The JOBS Table has a JOB_STATUS Column which is a roll-up (i.e. minimum state) of the TASK_STATUS Columns in the TASKS Table. The JOBS Table also has a TASK_COUNT value that contains the number of TASKS associated with a Job.
Jobs can have one or more tasks: JOB_ID in each table links them.
In DB2, I have a series of simple triggers to roll-up this state; here's one in the middle:
create or replace trigger JOB_AUTHORED
after update of TASK_STATUS on TASKS
referencing NEW as N
for each row
when (TASK_STATUS = 'Authored')
update JOBS
set JOB_STATUS = 'Authored'
where JOBS.JOB_ID = N.JOB_ID
and TASK_COUNT=(
select count(0) from TASKS
where TASKS.JOB_ID = N.JOB_ID
and TASKS.TASK_STATUS in ('Authored','Completed'))
This works fine in DB2 because the trigger runs in the same work unit as the triggering event and thus it can see the work unit's uncommitted changes and can count the TASK_STATUS change that just occurred without hitting a row-lock.
Here's the translated trigger in Oracle:
create or replace trigger JOB_AUTHORED
after update of TASK_STATUS on TASKS
for each row
when (NEW.TASK_STATUS = 'Authored')
BEGIN
update JOBS
set JOB_STATUS='Authored'
where JOBS.JOB_ID = :NEW.JOB_ID and TASK_COUNT=(
select count(0) from TASKS
where TASKS.JOB_ID = :NEW.JOB_ID
and TASKS.TASK_STATUS in ('Authored','Completed'));
END;
In Oracle this fails with:
ORA-04091: table MYSCHEMA.TASKS is mutating, trigger/function may not see it
ORA-06512: at "MYSCHEMA.JOB_AUTHORED", line 1
ORA-04088: error during execution of trigger 'MYSCHEMA.JOB_AUTHORED'
[query: UPDATE TASKS SET TASK_STATUS=:1 where TASK_ID=:2]
Apparently Oracle's Triggers do not run in the same context, can not see the uncommitted triggering updates and thus will never be able to count the number of tasks in specific states which include the triggering row.
I guess I could change the AFTER Trigger to an INSTEAD OF Trigger and update the TASK_STATUS (as well as the JOB_STATUS) within the Trigger (so the Job Update can see the Task Update), but would I run into the same error? Maybe not the first Task Update, but what if the triggering program is updating a bunch of TASKS before committing: what happens when the second Task is updated?
I've also considered removing the Trigger and have a program scan active Jobs for the status of its Tasks, but that seems inelegant.
What is the Best Practice with something like this in Oracle?
The best practice is to avoid triggers if you can.
See these links for an explanation of why one should not use triggers:
http://www.oracle.com/technetwork/testcontent/o58asktom-101055.html
http://rwijk.blogspot.com/2007/09/database-triggers-are-evil.html
Use a procedure (an API) in place of triggers - you can create a package with a few procedures like add_new_job, add_new_task, change_task_status etc., and place all the logic (checking, changing tasks' states, changing jobs' states etc.) in one place. Easy to understand, easy to maintain, easy to debug and easy to track errors.
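A sketch of what such an API might look like for the Authored roll-up, using the table and column names from the question (the package and procedure names are made up):
create or replace package job_api as
  procedure change_task_status
  ( p_task_id in tasks.task_id%type
  , p_status  in tasks.task_status%type );
end job_api;
/
create or replace package body job_api as
  procedure change_task_status
  ( p_task_id in tasks.task_id%type
  , p_status  in tasks.task_status%type )
  is
    l_job_id jobs.job_id%type;
  begin
    update tasks
       set task_status = p_status
     where task_id = p_task_id
    returning job_id into l_job_id;

    -- same roll-up logic as the trigger, but in the same transaction
    -- and with no mutating-table restriction
    if p_status = 'Authored' then
      update jobs j
         set j.job_status = 'Authored'
       where j.job_id = l_job_id
         and j.task_count = ( select count(*)
                                from tasks t
                               where t.job_id = j.job_id
                                 and t.task_status in ('Authored','Completed') );
    end if;
  end change_task_status;
end job_api;
/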
If you insist on using triggers, then you can create a compound trigger, as Tom Kyte mentions in the first link above, as a workaround. For example:
create or replace TRIGGER JOB_AUTHORED
FOR UPDATE OF TASK_STATUS on TASKS
COMPOUND TRIGGER
TYPE JOB_ID_TABLE_TYPE IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
JOB_ID_TABLE JOB_ID_TABLE_TYPE;
dummy CHAR;
AFTER EACH ROW IS
BEGIN
-- SELECT null INTO dummy FROM JOBS WHERE job_id = :NEW.JOB_ID FOR UPDATE;
JOB_ID_TABLE( :NEW.JOB_ID ) := :NEW.JOB_ID;
END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN
FORALL x IN INDICES OF JOB_ID_TABLE
UPDATE jobs set JOB_STATUS='Authored'
WHERE JOBS.JOB_ID = JOB_ID_TABLE( x )
and TASK_COUNT=(
select count(0) from TASKS
where TASKS.JOB_ID = JOBS.JOB_ID
and TASKS.TASK_STATUS in ('Authored','Completed')
);
END AFTER STATEMENT;
END JOB_AUTHORED;
but...
I am not sure whether there are pitfalls in this example that we are not aware of at this time.
For example, one pitfall is this scenario:
Suppose there are 18 tasks with status Authored.
At time X, user A runs UPDATE TASKS SET task_status = 'Authored' WHERE task_id = 2. The trigger fires and sees 18 + 1 = 19 tasks with status Authored.
At time X+10ms, user B runs UPDATE TASKS SET task_status = 'Authored' WHERE task_id = 4. The trigger fires and also sees 18 + 1 = 19 tasks with status Authored, because it cannot see user A's uncommitted change.
At time X+20ms, user A commits.
At time X+30ms, user B commits.
At the end we have 20 tasks with status Authored, but the job's status has not been changed to Authored (though it should have been, if the number of tasks is 20).
To avoid this trap you can use SELECT null INTO dummy FROM JOBS WHERE job_id = :NEW.JOB_ID FOR UPDATE; in the after-each-row part of the trigger, in order to place a lock on the corresponding record in the JOBS table and serialize access (it is commented out in the example above).
But I'm still not sure this is the correct solution - it may in turn cause deadlocks in scenarios I cannot imagine or predict at this time.
In short: in Oracle, a trigger cannot select from the table on which it is defined; otherwise you may (or will) get a mutating table error.
You have several options:
1) No triggers at all - in my humble opinion this is the best option; I would do it this way (but the topic is probably wider and we don't know everything).
Create a view that replaces the need for triggers, something like:
create or replace view v_jobs_done as
select * from jobs where not exists (
  select 1 from tasks
  where TASKS.JOB_ID = jobs.JOB_ID
  and TASKS.TASK_STATUS not in ('Authored','Completed')
)
2) Instead of an increasing count, use a decreasing one - when a remaining-tasks counter on JOBS reaches zero you know everything is done. In this case you have to build/rebuild your other triggers accordingly (a rough sketch follows after the compound trigger in option 3),
3) A proposition close to yours - you can use a modern compound trigger (I have doubts about performance here, but it works):
create or replace trigger Job_Authored
for update of task_status on tasks compound trigger
type t_ids is table of tasks.job_id%type;
type t_cnts is table of number;
type t_job_counts is table of number index by varchar2(10);
v_ids t_ids;
v_cnts t_cnts;
v_job_counts t_job_counts;
before statement is
begin
select job_id, count(1)
bulk collect into v_ids, v_cnts
from tasks where tasks.task_status in ('Authored','Completed')
group by job_id;
for i in 1..v_ids.count() loop
v_job_counts(v_ids(i)) := v_cnts(i);
end loop;
end before statement;
after each row is
begin
if :new.task_status = 'Authored' then
update jobs set job_status='Authored'
where job_id = :new.job_id
and task_count = v_job_counts(:new.job_id);
end if;
end after each row;
end Job_Authored;
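For completeness, a rough sketch of option 2, assuming a hypothetical jobs.remaining_tasks column that starts out equal to task_count; because it never queries TASKS, it avoids the mutating-table error:
create or replace trigger job_task_done
after update of task_status on tasks
for each row
when ( new.task_status in ('Authored','Completed')
   and old.task_status not in ('Authored','Completed') )
begin
  update jobs
     set remaining_tasks = remaining_tasks - 1,
         -- the right-hand side sees the pre-update value, so
         -- remaining_tasks - 1 = 0 means this was the last open task
         job_status = case when remaining_tasks - 1 = 0
                           then 'Authored' else job_status end
   where job_id = :new.job_id;
end;
/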

Oracle PL/SQL: a scheduled procedure, leading to firing of a trigger?

Okay, I'm new to Oracle PL/SQL and I've stumbled across a problem that I cannot figure out.
I have a procedure that transfers data from one table to another, and a trigger that fires on insertion into the second table. I scheduled the procedure to run every minute (for testing - it would be daily once I've figured this out) using DBMS_JOB.SUBMIT. The scheduling part works perfectly; however, after the procedure completes, the trigger is not fired. I tried both BEFORE and AFTER INSERT, but it is still not working. If I call the procedure directly, it works and the trigger fires just fine. So... I'm already wondering whether a scheduled procedure can fire the trigger at all?!
This is the schedule's code:
DECLARE
VJOBN BINARY_INTEGER;
BEGIN
DBMS_JOB.SUBMIT(
JOB => VJOBN,
INTERVAL => 'SYSDATE + 1/2880',
WHAT => 'BEGIN my_procedure(); END;'
);
END;
create or replace TRIGGER TO_PRJ
AFTER INSERT ON PROJECTS
FOR EACH ROW
BEGIN
IF INSERTING
THEN DBMS_OUTPUT.PUT_LINE('INSERTED PROJECT WITH ID: '||:NEW.PROJECT_ID);
END IF;
END;
Table PROJECTS has ID number, name varchar2, and some other that are not important.
The procedure transfers the ID and the name from orders to projects.
P.S. I'm using http://apex.oracle.com and when I get the timestamp from it the time is actually 6 hours behind me - not sure if it can be of any significance...
DBMS_OUTPUT and DBMS_JOB do not work the way you are trying to use them. The scheduled job is probably running and the trigger is firing - but since DBMS_OUTPUT needs to be activated in the session that executes the DBMS_OUTPUT calls (i.e. the internal session used by DBMS_JOB), you will never see any output.
DBMS_OUTPUT's output is not visible across sessions, so the session that issues the DBMS_JOB.SUBMIT command will NOT receive the output, even if DBMS_OUTPUT is activated for that session.
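If the goal is just to confirm that the trigger fires when the job runs, a common workaround is to write to a plain log table instead of DBMS_OUTPUT; a sketch, with a made-up table name:
create table trigger_log
( logged_at date default sysdate
, message   varchar2(4000)
);

create or replace TRIGGER TO_PRJ
AFTER INSERT ON PROJECTS
FOR EACH ROW
BEGIN
  insert into trigger_log (message)
  values ('INSERTED PROJECT WITH ID: ' || :NEW.PROJECT_ID);
END;
/
-- after the scheduled run:
-- select * from trigger_log order by logged_at desc;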
Try using the scheduler (DBMS_SCHEDULER); it's much better than jobs. And post the code of the trigger and the tables here - it may help.
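For reference, a DBMS_SCHEDULER version of the same schedule might look like this (the job name is arbitrary; the 30-second interval mirrors the SYSDATE + 1/2880 used above):
begin
  dbms_scheduler.create_job(
    job_name        => 'MY_PROCEDURE_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN my_procedure(); END;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=SECONDLY; INTERVAL=30',
    enabled         => true
  );
end;
/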
