Visibility of inserted record in FORALL bulk statement - Oracle

I have two arrays which I will use with FORALL to insert/update. I am not sure if I can use a MERGE statement with FORALL, so I am using two arrays.
While populating the data, I check whether each record already exists in the database. If it does, the record goes into the array meant for updates; otherwise it goes into the insert array.
TYPE T_TABLE IS TABLE OF TABLE_1%ROWTYPE INDEX BY PLS_INTEGER;
TAB_I T_TABLE;
TAB_U T_TABLE;
--..more code
BEGIN
  SELECT COUNT(*) INTO rec_exist FROM TABLE_1 WHERE COL_1 = ID;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    rec_exist := 0;
END;
--.... more code
-- COL1 is PK
IF rec_exist = 0 THEN
  TAB_I(IDX_I).COL1 := col1;
  TAB_I(IDX_I).COL2 := col2;
  IDX_I := IDX_I + 1;
ELSE
  TAB_U(IDX_U).COL1 := col1;
  TAB_U(IDX_U).COL2 := col2;
  IDX_U := IDX_U + 1;
END IF;
I send these collections to the insert/update statements after every 1,000 records.
Now the question: imagine I receive a record which doesn't exist yet in TABLE_1, so I put it in the TAB_I array. Then I receive another update for that same record. Since it is still not in the table, I would again decide to put it in TAB_I, which will then give me a problem (a primary key violation) when I insert it in the FORALL loop.
Now if my forall loop is like
FORALL ..
INSERT
FORALL ..
UPDATE
COMMIT;
If I instead put that second record in the update array, so that the UPDATE statement touches a row which is not in the table yet but was inserted by the FORALL above it, would that update work?

MERGE with FORALL - not supported, unless something has changed recently (12c perhaps?). The reasoning is that a MERGE statement already performs an implicit FORALL-like action, because its USING clause can select multiple values from a variety of sources, including PL/SQL collections. I've never tried this myself, but I've seen examples, including this AskTom posting.
If you do the INSERTs before the UPDATEs, the INSERTed values will be visible when the UPDATEs are performed.
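To make the ordering concrete, here is a minimal sketch of the pattern (the TABLE_1/COL1/COL2 names come from the question; the update side is shown with scalar collections, hypothetical names upd_col1/upd_col2, because some Oracle releases do not allow referencing individual record fields inside a FORALL):
DECLARE
  TYPE t_table IS TABLE OF TABLE_1%ROWTYPE INDEX BY PLS_INTEGER;
  TYPE t_col1  IS TABLE OF TABLE_1.COL1%TYPE INDEX BY PLS_INTEGER;
  TYPE t_col2  IS TABLE OF TABLE_1.COL2%TYPE INDEX BY PLS_INTEGER;
  tab_i    t_table;   -- rows to insert, as in the question
  upd_col1 t_col1;    -- key values of rows to update
  upd_col2 t_col2;    -- new COL2 values for those rows
BEGIN
  -- ... collections populated as in the question, densely indexed from 1 ...
  FORALL i IN 1 .. tab_i.COUNT
    INSERT INTO TABLE_1 VALUES tab_i(i);
  -- Rows inserted by the FORALL above are already visible to this session,
  -- so the UPDATE below will see them as well as pre-existing rows.
  FORALL i IN 1 .. upd_col1.COUNT
    UPDATE TABLE_1
       SET COL2 = upd_col2(i)
     WHERE COL1 = upd_col1(i);
  COMMIT;
END;
/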
Share and enjoy.

How to create TRIGGER with a reference to the triggered table?

Can I create an AFTER trigger on a table and use that same table in my SELECT query without getting a mutating table error?
Here is an example of the query I want to use.
This query will update the number of times a certain status name shows up in the alert life cycle:
CREATE OR REPLACE TRIGGER COUNT_STEP
  AFTER INSERT
  ON STEPS
  FOR EACH ROW
DECLARE
  V_COUNT_SETP NUMBER;
BEGIN
  SELECT COUNT (STATUS_NAME)
    INTO V_COUNT_SETP
    FROM (SELECT A.ALERT_ID, S.STATUS_NAME
            FROM ALERTS A, ALERT_STATUSES S, STEPS ST
           WHERE :NEW.ALERT_INTERNAL_ID = A.ALERT_INTERNAL_ID
             AND ST.ALERT_STATUS_INTERNAL_ID = S.STATUS_INTERNAL_ID
             AND S.STATUS_NAME IN ('Auto Escalate'))
   GROUP BY ALERT_ID;
  UPDATE ALERTS A
     SET A.COUNT = V_COUNT_SETP
   WHERE A.ALERT_INTERNAL_ID = :NEW.ALERT_INTERNAL_ID;
END;
/
The table I'm inserting a record into is also needed for counting the number of step occurrences, since it stores the alert id and the ids of all the steps it had.
You need to be a bit clearer in your questions. But, from what I understood, you need to create a trigger on a table and perform a SELECT against that same table, which gives you a mutating table error. To get around that, you can use a compound trigger on that table. Something like this:
create or replace trigger emp_ct
for insert on employees compound trigger
  v_count number; -- Add variables here
  before statement is
  begin
    -- PERFORM YOUR SELECT AND SEND THE RESULT TO A VARIABLE
  end before statement;
  after each row is
  begin
    -- DO WHAT YOU WANTED TO DO, USING THE VARIABLE
  end after each row;
end;
Basically, with a compound trigger you can handle every trigger timing point in one object, and that lets you query the table the trigger is defined on without hitting the mutating table error.
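Applied to the STEPS/ALERTS example from the question, one possible shape is sketched below (it reuses the table and column names from the question, assumes STEPS has the ALERT_INTERNAL_ID column the :NEW reference implies, and collects the affected alert ids per row so that STEPS is only queried after the statement finishes):
CREATE OR REPLACE TRIGGER COUNT_STEP
FOR INSERT ON STEPS
COMPOUND TRIGGER
  -- remember which alerts were touched by this statement
  TYPE t_ids IS TABLE OF ALERTS.ALERT_INTERNAL_ID%TYPE;
  v_ids t_ids := t_ids();
  AFTER EACH ROW IS
  BEGIN
    v_ids.EXTEND;
    v_ids(v_ids.COUNT) := :NEW.ALERT_INTERNAL_ID;
  END AFTER EACH ROW;
  AFTER STATEMENT IS
  BEGIN
    -- STEPS is no longer mutating at this point, so it can be queried safely
    FOR i IN 1 .. v_ids.COUNT LOOP
      UPDATE ALERTS A
         SET A.COUNT = (SELECT COUNT(*)
                          FROM STEPS ST
                          JOIN ALERT_STATUSES S
                            ON S.STATUS_INTERNAL_ID = ST.ALERT_STATUS_INTERNAL_ID
                         WHERE ST.ALERT_INTERNAL_ID = v_ids(i)
                           AND S.STATUS_NAME = 'Auto Escalate')
       WHERE A.ALERT_INTERNAL_ID = v_ids(i);
    END LOOP;
  END AFTER STATEMENT;
END;
/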

Auditing a table with many columns without Fine Grained Auditing

I have to create a trigger for a table with many columns and I want to know if there is any way to avoid writing each column name after :new and :old. Instead of using the column name explicitly, I want to use an element from a collection holding the column names of the target table (the table on which the trigger is set).
Line 25 is the one with the binding error:
DBMS_OUTPUT.PUT_LINE('Updating customer id'||col_name(i)||to_char(:new.col_name(i)));
Below you can see my trigger:
CREATE OR REPLACE TRIGGER TEST_TRG
BEFORE INSERT OR UPDATE ON ITEMS
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
DECLARE
TYPE col_list IS TABLE OF VARCHAR2(60);
col_name col_list := col_list();
total INTEGER;
counter INTEGER :=0;
BEGIN
SELECT COUNT(*)
INTO total
FROM user_tab_columns
WHERE table_name = 'ITEMS';
FOR rec IN
(SELECT column_name FROM user_tab_columns WHERE table_name = 'ITEMS'
)
LOOP
col_name.extend;
counter :=counter+1;
col_name(counter) := rec.column_name;
dbms_output.put_line(col_name(counter));
END LOOP;
dbms_output.put_line(TO_CHAR(total));
FOR i IN 1 .. col_name.count
LOOP
IF UPDATING(col_name(i)) THEN
DBMS_OUTPUT.PUT_LINE('Updating customer id'||col_name(i)||to_char(:new.col_name(i)));
END IF;
END LOOP;
END;
Sincerely,
After digging more I have found that it is not possible to dynamically reference the :new.column_name or :old.column_name values in a trigger. Because of this I will use my code only for INSERT (it does not have an old value :-() and I will write some Java code to generate the UPDATE statements.
I must refine my previous answer based on what has been said by Justin Cave and also my own findings. We can create a dynamic list of values for the INSERTING and UPDATING cases, based on the referencing clause (:old and :new). For example, I created two nested table collections of varchars. One collection contains all the column names, as strings, that I will use for auditing, and the other contains the values for those columns via the binding reference (e.g. :new.). After the INSERTING predicate I built an index-by collection (an associative array) of strings whose key is taken from the list of column names and whose value is taken from the list of :new values for those columns. Thanks to the index-by collection you have a fully working dynamic list at your disposal. Good luck :-)
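As a rough illustration of that idea, here is a sketch (the ITEMS columns ID and NAME are hypothetical, and each :NEW reference still has to be written out explicitly, e.g. generated once from USER_TAB_COLUMNS, because it cannot be resolved dynamically at run time):
CREATE OR REPLACE TRIGGER TEST_TRG
BEFORE INSERT ON ITEMS
FOR EACH ROW
DECLARE
  -- column name -> new value (an associative array keyed by column name)
  TYPE t_vals IS TABLE OF VARCHAR2(4000) INDEX BY VARCHAR2(60);
  v_new t_vals;
  v_col VARCHAR2(60);
BEGIN
  -- these assignments can be generated from USER_TAB_COLUMNS,
  -- but each :NEW reference is still hard-coded (hypothetical columns ID, NAME)
  v_new('ID')   := TO_CHAR(:NEW.id);
  v_new('NAME') := :NEW.name;
  -- walk the associative array by key
  v_col := v_new.FIRST;
  WHILE v_col IS NOT NULL LOOP
    dbms_output.put_line('Inserting ' || v_col || ' = ' || v_new(v_col));
    v_col := v_new.NEXT(v_col);
  END LOOP;
END;
/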

Attempting to solve mutating table issue with compound trigger

I am attempting to follow the instructions at Mutating Table Compound Trigger to update a parent table based on the id of a child table and avoid a mutating table error.
I need to obtain the parent id (BLATranscriptId) of the current record in the BLATChildren table and then after the update of the child record completes, I need to extract the current count that matches my criteria and perform an update on the parent table BLATranscript.
THIS ERROR HAS BEEN SOLVED - I was getting an error that my bind variable "transcriptID" was bad in the "AFTER EACH ROW" portion of my code. I've verified that BLATChildren.BLATranscriptId exists and is spelled correctly. The solution was to change AFTER EACH ROW to AFTER STATEMENT.
NEW ISSUE - Trigger is updating every record in the parent table, not just the matching parent record.
Any help would be greatly appreciated. Thank you.
CREATE OR REPLACE TRIGGER transcript_after_update
FOR UPDATE of enrollmentStatus,completionDate on blatChildren
COMPOUND TRIGGER
transcriptID number:=0;
BEFORE EACH ROW IS
BEGIN
transcriptID := :new.blaTranscriptid;
END BEFORE EACH ROW;
AFTER STATEMENT IS
BEGIN
update BLATranscript SET (enrollmentStatus, completionDate) = (select 'C',sysdate from dual)
where id in (select blat.id from BLATranscript blat
inner join BlendedActivity bla on blat.blendedactivityid=bla.id
where blat.id=transcriptID and minCompletion<=(
select count(countForCompletion) as total from blatChildren blac
inner join BlendedActivityMembership bam on blac.childActivityId=bam.childActivityId
where completionDate>=sysdate-acceptPrevWork
and blat.id=transcriptID
and blac.enrollmentStatus='C'));
END AFTER STATEMENT;
END;
/
You need a per-row section to collect :new.blaTranscriptid into a PL/SQL table,
and an AFTER STATEMENT section that performs the update using that PL/SQL table.
Like:
CREATE OR REPLACE TRIGGER transcript_after_update
FOR UPDATE of enrollmentStatus,completionDate on blatChildren
COMPOUND TRIGGER
TYPE transcriptIDs_t IS TABLE OF blatChildren.blaTranscriptid%TYPE;
transcriptID transcriptIDs_t := transcriptIDs_t();
BEFORE EACH ROW IS
BEGIN
transcriptID.extend;
transcriptID(transcriptID.count) := :new.blaTranscriptid;
END BEFORE EACH ROW;
AFTER STATEMENT IS
BEGIN
FOR i IN 1..transcriptID.COUNT
LOOP
update BLATranscript SET (enrollmentStatus, completionDate) = (select 'C',sysdate from dual)
where id in (select blat.id from BLATranscript blat
inner join BlendedActivity bla on blat.blendedactivityid=bla.id
where blat.id = transcriptID(i) and minCompletion<=(
select count(countForCompletion) as total from blatChildren blac
inner join BlendedActivityMembership bam on blac.childActivityId=bam.childActivityId
where completionDate>=sysdate-acceptPrevWork
and blat.id=transcriptID(i)
and blac.enrollmentStatus='C'));
END LOOP;
END AFTER STATEMENT;
END;
/

Insert in target table and then update the source table field in oracle

In Oracle, I have a requirement where I need to insert records from a source table into a target table and then update the PROCESSED_DATE field of the source once the target has been populated.
One way is to use a cursor and loop row by row to achieve this.
Is there a more efficient way to do the same thing?
No need for a cursor. Assuming you want to transfer those rows that have not yet been transferred (identified by a NULL value in processed_date):
insert into target_table (col1, col2, col3)
select col1, col2, col3
from source_table
where processed_date is null;
update source_table
set processed_date = current_timestamp
where processed_date is null;
commit;
To avoid updating rows that were inserted during the runtime of the INSERT or between the INSERT and the update, start the transaction in serializable mode.
Before you run the INSERT, start the transaction using the following statement:
set transaction isolation level SERIALIZABLE;
For more details see the manual:
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_10005.htm#i2067247
http://docs.oracle.com/cd/E11882_01/server.112/e25789/consist.htm#BABCJIDI
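Putting those pieces together, the whole job might look like this (just a sketch, reusing the column names assumed above):
set transaction isolation level SERIALIZABLE;
insert into target_table (col1, col2, col3)
select col1, col2, col3
from source_table
where processed_date is null;
update source_table
set processed_date = current_timestamp
where processed_date is null;
commit;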
A trigger should work. The target table can have a trigger that, on insert, updates the source table's PROCESSED_DATE column with the processed date.
My preferred solution in this sort of instance is to use a PL/SQL array along with batch DML, e.g.:
DECLARE
CURSOR c IS SELECT * FROM tSource;
TYPE tarrt IS TABLE OF c%ROWTYPE INDEX BY BINARY_INTEGER;
tarr tarrt;
BEGIN
OPEN c;
FETCH c BULK COLLECT INTO tarr;
CLOSE c;
FORALL i IN 1..tarr.COUNT
INSERT INTO tTarget VALUES tarr(i);
FORALL i IN 1..tarr.COUNT
UPDATE tSource SET processed_date = SYSDATE
WHERE tSource.id = tarr(i).id;
END;
The above code is an example only and makes some assumptions about the structure of your tables.
It first queries the source table, and will only insert and update those records - which means you don't need to worry about other sessions concurrently inserting more records into the source table while this is running.
It can also be easily changed to process the rows in batches (using the fetch LIMIT clause and a loop) rather than all-at-once like I have here.
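For example, the batched variant mentioned above could look roughly like this (again only a sketch, assuming the same tSource/tTarget structures and an arbitrary batch size of 1000):
DECLARE
  CURSOR c IS SELECT * FROM tSource;
  TYPE tarrt IS TABLE OF c%ROWTYPE INDEX BY BINARY_INTEGER;
  tarr tarrt;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO tarr LIMIT 1000;  -- fetch 1000 rows per batch
    EXIT WHEN tarr.COUNT = 0;
    FORALL i IN 1..tarr.COUNT
      INSERT INTO tTarget VALUES tarr(i);
    FORALL i IN 1..tarr.COUNT
      UPDATE tSource SET processed_date = SYSDATE
      WHERE tSource.id = tarr(i).id;
  END LOOP;
  CLOSE c;
END;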
I got another answer from someone else. I think that solution is more reasonable than enabling the serializable isolation level, since all my new records have PROCESSED_DATE set to NULL (30 rows were inserted into the source while the records were being inserted into the target table).
Also, the PROCESSED_DATE = NULL rows can only be updated by my job. No other user can update these records at any point in time.
declare
date_stamp date;
begin
select sysdate
into date_stamp
from dual;
update source set processed_date = date_stamp
where processed_date is null;
Insert into target
select * from source
where processed_date = date_stamp;
commit;
end;
/
Let me know any further thoughts on this. Thanks a lot for all your help on this.

Can a table variable be used in a select statement where clause?

I have a stored procedure that does a two-step query. The first step is to gather a list of VARCHAR2 values from a table and collect them into a table variable, defined like this:
TYPE t_cids IS TABLE OF VARCHAR2(50) INDEX BY PLS_INTEGER;
v_cids t_cids;
So basically I have:
SELECT item BULK COLLECT INTO v_cids FROM table_one;
This works fine up until the next bit.
Now I want to use that collection in the where clause of another query within the same procedure, like so:
SELECT * FROM table_two WHERE cid IN v_cids;
Is there a way to do this? I am able to select an individual element, but I would like to use the table variable the way I would use a regular table. I've tried variations using nested selects, but that doesn't seem to work either.
Thanks a lot,
Zach
You have several choices as to how you achieve this.
If you want to use a collection, then you can use the TABLE function to select from it, but the type of collection you use becomes important.
As a brief example, this creates a database type that is a table of numbers:
CREATE TYPE number_tab AS TABLE OF NUMBER
/
Type created.
The next block then populates the collection and performs a rudimentary select from it using it as a table and joining it to the EMP table (with some output so you can see what's happening):
DECLARE
-- Create a variable and initialise it
v_num_tab number_tab := number_tab();
--
-- This is a collection for showing the output
TYPE v_emp_tabtype IS TABLE OF emp%ROWTYPE
INDEX BY PLS_INTEGER;
v_emp_tab v_emp_tabtype;
BEGIN
-- Populate the number_tab collection
v_num_tab.extend(2);
v_num_tab(1) := 7788;
v_num_tab(2) := 7902;
--
-- Show output to prove it is populated
FOR i IN 1 .. v_num_tab.COUNT
LOOP
dbms_output.put_line(v_num_tab(i));
END LOOP;
--
-- Perform a select using the collection as a table
SELECT e.*
BULK COLLECT INTO v_emp_tab
FROM emp e
INNER JOIN TABLE(v_num_tab) nt
ON (e.empno = nt.column_value);
--
-- Display the select output
FOR i IN 1 .. v_emp_tab.COUNT
LOOP
dbms_output.put_line(v_emp_tab(i).empno||' is a '||v_emp_tab(i).job);
END LOOP;
END;
You can see from this that the database TYPE collection (number_tab) was treated as a table and could be used as such.
Another option would be to simply join the two tables you are selecting from in your example:
SELECT tt.*
FROM table_two tt
INNER JOIN table_one t1
ON (t1.item = tt.cid);
There are other ways of doing this but the first might suit your needs best.
Hope this helps.
--Doesn't work.
--SELECT item BULK COLLECT AS 'mySelectedItems' INTO v_cids FROM table_one;
SELECT table_two.*
FROM table_two INNER JOIN v_cids
ON table_two.paramname = v_cids.mySelectedItems;
Unless I'm misunderstanding the question, this should only return results that are in the table variable.
Note: I've never used Oracle, but I imagine this case would be the same.
