PL/SQL loop through query finds all the records - Oracle

I'm facing a problem with a stored procedure. The thing is, I'm trying to find out whether the transaction number in the temporary table is already in the final table: if not, it should insert the record; if it is in the final table, the record should go to a log_error table. Here's my SP:
-- vocursor, prfcursor, ntxcursor, trxcursor, cod_error and warning are
-- assumed to be parameters of the stored procedure this block comes from.
DECLARE
    v_date         temporary_table.transfer_date%TYPE; -- "date" is a reserved word, so renamed
    auth           temporary_table.auth_code%TYPE;
    transac_num    temporary_table.transaction_number%TYPE;
    card           temporary_table.card_number%TYPE;
    amount         temporary_table.amount%TYPE;
    num_trx_search NUMBER;
    counter        NUMBER;
    sid1           NUMBER;
    sid2           NUMBER;
    loopcounter    NUMBER;
BEGIN
    cod_error := 0;
    warning   := 'execution';

    OPEN vocursor FOR
        SELECT transfer_date,
               auth_code,
               transaction_number,
               card_number,
               amount
          FROM temporary_table
         ORDER BY id;
    prfcursor := vocursor;

    OPEN ntxcursor FOR
        SELECT transaction_number FROM final_table ORDER BY id;
    trxcursor := ntxcursor;

    LOOP
        FETCH prfcursor INTO v_date, auth, transac_num, card, amount;
        EXIT WHEN prfcursor%NOTFOUND;
        FETCH trxcursor INTO num_trx_search;
        dbms_output.put_line('NumTrx: ' || transac_num);
        BEGIN
            -- i need to check if the transaction number from the temporary
            -- table is already in the final table
            FOR loopcounter IN (SELECT id
                                  FROM final_table
                                 WHERE transaction_number = transac_num)
            LOOP
                dbms_output.put_line(loopcounter.id);
            END LOOP;
            dbms_output.put_line('num_trx_search: ' || num_trx_search);
            dbms_output.put_line('counter: ' || counter);
        EXCEPTION
            WHEN NO_DATA_FOUND THEN
                dbms_output.put_line('No Data found');
        END;
        EXIT WHEN trxcursor%NOTFOUND;
        -- just for testing and debugging
        counter := 1;
        IF (counter > 0) THEN
            NULL; -- inserts into log error table
        ELSE
            NULL; -- inserts into final table
        END IF;
    END LOOP;
    dbms_output.put_line('end loop');

    CLOSE trxcursor;
    CLOSE prfcursor;
    dbms_output.put_line('end cursor');
END;
The thing is, it's getting all the results; for each record in the temporary table it should get just one row, when the transaction number matches.
NumTrx is the transaction number in the temporary table.
I'm a noob in PL/SQL, thanks.

You could achieve the same thing by trying to insert the records from the temporary table into the final table and using the LOG ERRORS INTO clause to push the records that are already in final_table into another table.
INSERT INTO final_table
SELECT transfer_date,
       auth_code,
       transaction_number,
       card_number,
       amount
  FROM temporary_table
   LOG ERRORS INTO err$_final_table REJECT LIMIT UNLIMITED;
The query above assumes that final_table and temporary_table have the same structure; if they differ you will need to adjust the query slightly. It also relies on final_table having a unique or primary key constraint on transaction_number, so a duplicate raises ORA-00001 and the offending row is routed to the error table instead of failing the whole statement (the default REJECT LIMIT is 0, hence the explicit UNLIMITED). Generally you should try to do as much as you can in a single SQL statement rather than writing lots of procedural code to achieve the same thing. It is usually quicker and, in this case, appears to be much simpler.
For the setup of the ERR$ table I suggest you look at the Oracle docs under DML ERROR LOGGING.
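For example, the default error table can be created once with DBMS_ERRLOG (a minimal sketch; ERR$_FINAL_TABLE is the default name the package generates):
-- One-off setup: creates ERR$_FINAL_TABLE alongside FINAL_TABLE
BEGIN
    DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'FINAL_TABLE');
END;
/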
If you do wish to do a row-by-row (slow) comparison then I would suggest using implicit cursor FOR loops instead, simply for readability. Also, I don't think the ORDER BY on each cursor is doing anything except slowing your code down.
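A minimal sketch of that row-by-row shape, reusing the table and column names from the question (the actual INSERTs are left as placeholders, as in the original):
BEGIN
    FOR r IN (SELECT transfer_date, auth_code, transaction_number,
                     card_number, amount
                FROM temporary_table)
    LOOP
        DECLARE
            l_matches NUMBER;
        BEGIN
            -- count how many final_table rows already carry this transaction number
            SELECT COUNT(*)
              INTO l_matches
              FROM final_table
             WHERE transaction_number = r.transaction_number;

            IF l_matches > 0 THEN
                NULL; -- insert into the log_error table here
            ELSE
                NULL; -- insert into final_table here
            END IF;
        END;
    END LOOP;
END;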

Related

Statement level trigger to enforce a constraint

I am trying to implement a statement level trigger to enforce the following "An applicant cannot apply for more than two positions in one day".
I am able to enforce it using a row level trigger (as shown below) but I have no clue how to do so using a statement level trigger when I can't use :NEW or :OLD.
I know there are alternatives to using a trigger but I am revising for my exam that would have a similar question so I would appreciate any help.
CREATE TABLE APPLIES(
anumber NUMBER(6) NOT NULL, /* applicant number */
pnumber NUMBER(8) NOT NULL, /* position number */
appDate DATE NOT NULL, /* application date*/
CONSTRAINT APPLIES_pkey PRIMARY KEY(anumber, pnumber)
);
CREATE OR REPLACE TRIGGER app_trigger
    BEFORE INSERT ON APPLIES
    FOR EACH ROW
DECLARE
    counter NUMBER;
BEGIN
    SELECT COUNT(*) INTO counter
      FROM APPLIES
     WHERE anumber = :NEW.anumber
       AND to_char(appDate, 'DD-MON-YYYY') = to_char(:NEW.appDate, 'DD-MON-YYYY');
    IF counter = 2 THEN
        RAISE_APPLICATION_ERROR(-20001, 'error msg');
    END IF;
END;
You're correct that you don't have :OLD and :NEW values, so you need to check the entire table to see whether the condition (let's not call it a "constraint", as that term has a specific meaning in a relational database) has been violated:
CREATE OR REPLACE TRIGGER APPLIES_AIU
    AFTER INSERT OR UPDATE ON APPLIES
BEGIN
    FOR aRow IN (SELECT ANUMBER,
                        TRUNC(APPDATE) AS APPDATE,
                        COUNT(*) AS APPLICATION_COUNT
                   FROM APPLIES
                  GROUP BY ANUMBER, TRUNC(APPDATE)
                 HAVING COUNT(*) > 2)
    LOOP
        -- If we get to here it means we have at least one user who has
        -- applied for more than two jobs in a single day.
        RAISE_APPLICATION_ERROR(-20002, 'Applicant ' || aRow.ANUMBER ||
                                ' applied for ' || aRow.APPLICATION_COUNT ||
                                ' jobs on ' ||
                                TO_CHAR(aRow.APPDATE, 'DD-MON-YYYY'));
    END LOOP;
END APPLIES_AIU;
It's a good idea to add an index to support this query so it will run efficiently:
CREATE INDEX APPLIES_BIU_INDEX
ON APPLIES(ANUMBER, TRUNC(APPDATE));
Best of luck.
Your rule involves more than one row at a time, so you cannot use a row-level (FOR EACH ROW) trigger: querying APPLIES as you propose would hurl an ORA-04091: table is mutating exception.
So, an AFTER statement trigger it is.
CREATE OR REPLACE TRIGGER app_trigger
    AFTER INSERT OR UPDATE ON APPLIES
DECLARE
    cursor c_cnt is
        SELECT 1
          FROM APPLIES
         GROUP BY anumber, trunc(appDate)
        HAVING count(*) > 2;
    dummy number;
BEGIN
    open c_cnt;
    fetch c_cnt into dummy;
    if c_cnt%found then
        close c_cnt;
        RAISE_APPLICATION_ERROR(-20001, 'error msg');
    end if;
    close c_cnt;
END;
Obviously, querying the whole table will be inefficient at scale (one of the reasons why triggers are not recommended for this sort of thing), so this is a situation in which we might want to use a compound trigger (assuming we're on 11g or later).
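A minimal sketch of that compound-trigger shape, assuming 11g or later; collecting the touched applicant numbers and re-checking only those is my own embellishment, not something spelled out above:
CREATE OR REPLACE TRIGGER app_trigger_cpd
    FOR INSERT OR UPDATE ON APPLIES
    COMPOUND TRIGGER
    -- state lives only for the duration of one triggering statement
    TYPE t_anumbers IS TABLE OF APPLIES.anumber%TYPE;
    g_anumbers t_anumbers := t_anumbers();

    AFTER EACH ROW IS
    BEGIN
        -- remember which applicants this statement touched
        g_anumbers.EXTEND;
        g_anumbers(g_anumbers.LAST) := :NEW.anumber;
    END AFTER EACH ROW;

    AFTER STATEMENT IS
        l_max_per_day PLS_INTEGER;
    BEGIN
        -- re-check only the touched applicants, not the whole table
        FOR i IN 1 .. g_anumbers.COUNT LOOP
            SELECT NVL(MAX(COUNT(*)), 0)
              INTO l_max_per_day
              FROM APPLIES
             WHERE anumber = g_anumbers(i)
             GROUP BY TRUNC(appDate);
            IF l_max_per_day > 2 THEN
                RAISE_APPLICATION_ERROR(-20001,
                    'Applicant ' || g_anumbers(i) ||
                    ' applied for more than two positions in one day');
            END IF;
        END LOOP;
    END AFTER STATEMENT;
END app_trigger_cpd;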

Bulk inserting in Oracle PL/SQL

I have around 5 million records which need to be copied from a table in one schema to a table in another schema (in the same database). I have prepared a script but it gives me the error below.
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
Following is my script
DECLARE
    TYPE tA IS TABLE OF varchar2(10) INDEX BY PLS_INTEGER;
    TYPE tB IS TABLE OF SchemaA.TableA.band%TYPE INDEX BY PLS_INTEGER;
    TYPE tD IS TABLE OF SchemaA.TableA.start_date%TYPE INDEX BY PLS_INTEGER;
    TYPE tE IS TABLE OF SchemaA.TableA.end_date%TYPE INDEX BY PLS_INTEGER;
    rA tA;
    rB tB;
    rD tD;
    rE tE;
    f  number := 0;
BEGIN
    SELECT col1 || col2 || col3 as main_col,
           band,
           effective_start_date as start_date,
           effective_end_date as end_date
      BULK COLLECT INTO rA, rB, rD, rE
      FROM schemab.tableb;

    FORALL i IN rA.FIRST..rE.LAST
        insert into SchemaA.TableA
            (main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
        values (rA(i), rB(i), 'C', rD(i), rE(i), 71);

    f := f + 1;
    if (f = 10000) then
        commit;
    end if;
end;
Could you please help me in finding where the error lies?
Why not a simple
insert into SchemaA.TableA (main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
SELECT col1||col2||col3 as main_col, band, 'C', effective_start_date, effective_end_date, 71
FROM schemab.tableb;
This
f := f + 1;
if (f = 10000) then
    commit;
end if;
does not make any sense: f becomes 1, and that's it. f = 10000 will never be true, so you never COMMIT.
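If you do want periodic commits, the counter has to accumulate per batch inside a fetch loop and be reset after each COMMIT - a hedged fragment of that shape (the full version appears in an answer further down):
f := f + SQL%ROWCOUNT;  -- rows processed by the preceding FORALL
if (f >= 10000) then
    commit;
    f := 0;             -- reset so later batches can trigger the next commit
end if;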
The following script worked for me and I was able to load around 5 million rows within 15 minutes.
ALTER SESSION ENABLE PARALLEL DML
/
DECLARE
    cursor c_p1 is
        SELECT col1 || col2 || col3 as main_col,
               band,
               effective_start_date as start_date,
               effective_end_date as end_date
          FROM schemab.tableb;
    TYPE TY_P1_FULL is table of c_p1%rowtype
        index by pls_integer;
    v_P1_FULL TY_P1_FULL;
    v_seq_num number;
BEGIN
    open c_p1;
    loop
        fetch c_p1 BULK COLLECT INTO v_P1_FULL LIMIT 10000;
        exit when v_P1_FULL.count = 0;
        FOR i IN 1..v_P1_FULL.COUNT loop
            INSERT /*+ APPEND */ INTO schemaA.tableA VALUES (v_P1_FULL(i));
        end loop;
        commit;
    end loop;
    close c_P1;
    dbms_output.put_line('Load completed');
end;
/
-- Disable parallel mode for this session
ALTER SESSION DISABLE PARALLEL DML
/
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
You get that error because you have a literal in the VALUES clause of the INSERT. The FORALL expects everything to be bound to an array.
Your program has a massive problem, literally. You have no LIMIT on the BULK COLLECT clause, so that's going to try to load all five million records from TableB into your collections. That will blow your session's memory limit.
The point of using BULK COLLECT and FORALL is to bite off chunks of a bigger data set and process them in batches. For that you need a loop, but not one with a FOR condition: instead, test whether the fetch returned anything and exit when the array has zero records.
DECLARE
    TYPE recA IS RECORD (
        main_col   SchemaA.TableA.main_col%TYPE
        , band       SchemaA.TableA.band%TYPE
        , user_type  SchemaA.TableA.user_type%TYPE
        , start_date date
        , end_date   date
        , roll_no    number);
    TYPE recsA is table of recA
        index by pls_integer;
    nt_a recsA;
    f number := 0;
    CURSOR cur_b is
        -- the constant columns live in the query, so nothing is left to bind
        -- as a literal inside the FORALL
        SELECT col1 || col2 || col3 as main_col,
               band,
               'C' as user_type,
               effective_start_date as start_date,
               effective_end_date as end_date,
               71 as roll_no
          FROM schemab.tableb;
BEGIN
    open cur_b;
    loop
        fetch cur_b bulk collect into nt_a limit 1000;
        exit when nt_a.count() = 0;
        FORALL i IN 1..nt_a.count()
            -- the record's fields mirror TableA's columns in order
            insert into SchemaA.TableA values nt_a(i);
        f := f + sql%rowcount;
        if (f >= 10000) then
            commit;
            f := 0;
        end if;
    end loop;
    commit;
    close cur_b;
end;
Please note that issuing commits inside a loop is contraindicated: you lay yourself open to runtime errors such as ORA-01002 and ORA-01555, and if your program crashes half-way through you will have great difficulty resuming it without problems. By all means persist if you have problems with UNDO tablespace, but the correct answer is to get the DBA to enlarge the UNDO tablespace, not to weaken your code.
"i am using bulk insert because it gives better performance"
It is true that BULK COLLECT and FORALL ... INSERT performs better than a CURSOR FOR loop with row-by-row single inserts. It is not more efficient than a pure SQL INSERT INTO ... SELECT. The value of the construct is that it allows us to manipulate the contents of the array before inserting it. This is handy if we have complex business rules which can only be applied programmatically.
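For instance, something like this hedged fragment could sit between the FETCH and the FORALL in the block above (the cleanup rule itself is invented purely for illustration):
-- hypothetical programmatic fix-up of each fetched row
for i in 1..nt_a.count() loop
    if nt_a(i).end_date is null then
        nt_a(i).end_date := date '9999-12-31';  -- invented business default
    end if;
end loop;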
Please try after changing the first two lines of your code as below:
DECLARE
TYPE tA IS TABLE OF SchemaA.TableA.main_col%TYPE INDEX BY PLS_INTEGER;
...
...
This is likely a data type/length mismatch: in the declaration section you declared tA as VARCHAR2(10) instead of inheriting the type from the table.
Also, as mentioned, the f counter logic will not do the commit magic for you. Better to use LIMIT with BULK COLLECT.

Oracle PL/SQL how do you output how many inserts have been made in a FORALL statement

What's the best way of getting and outputting how many rows have been inserted in the FORALL statement I have below? I've seen SQL%BULK_ROWCOUNT but I'm not sure how that would work in the statement below.
Is it
DBMS_OUTPUT.PUT_LINE('rows inserted '||SQL%BULK_ROWCOUNT||'');
Does the above need to go in another FORALL statement? For the code below, how would I achieve this?
DECLARE
    TYPE t_arc_act_plus_trigger1 IS TABLE OF arc_act_plus_triggers1%ROWTYPE;
    v_arc_act_plus_triggers1 t_arc_act_plus_trigger1;
    l_stop_program_yn VARCHAR2(1); -- read back by Com0932.get_parameter

    CURSOR c_arc_act_plus_triggers1 IS
        SELECT /*+ PARALLEL */ apt.*
          FROM act_plus_triggers1 apt
         WHERE NOT EXISTS
               (SELECT 1
                  FROM act_plus_triggers_copy1 aptc
                 WHERE aptc.surr_id = apt.surr_id)
           AND apt.status IN ('EXT', 'EXP');
BEGIN
    OPEN c_arc_act_plus_triggers1;
    LOOP
        FETCH c_arc_act_plus_triggers1
            BULK COLLECT INTO v_arc_act_plus_triggers1 LIMIT 10000; -- limit to 10k to avoid out of memory

        FORALL i IN 1..v_arc_act_plus_triggers1.COUNT
            INSERT /*+ APPEND_VALUES */ INTO arc_act_plus_triggers1
            VALUES v_arc_act_plus_triggers1(i);

        Com0932.get_parameter('ACT_ARCHIVE_TRIGGER_STOP_YN', l_stop_program_yn);
        IF l_stop_program_yn = 'Y' THEN
            p_location('insert_into_arc_act_plus - STOP_PROGRAM_YN flag = ' || l_stop_program_yn || ' so ROLLBACK');
            ROLLBACK;
            EXIT;
        END IF;

        -- **************************************************
        -- Output how many records have been inserted here???
        -- **************************************************

        -- commit after every 10000 records into arc_act_plus_triggers1
        COMMIT;
        EXIT WHEN c_arc_act_plus_triggers1%NOTFOUND;
    END LOOP;
    CLOSE c_arc_act_plus_triggers1;
END;
I haven't checked as I have nothing to test against, so please forgive any 'missing semi-colon type errors', and I'm afraid I'm not in a position to performance-check this.
Your code selects which rows to insert into the archive table based on their non-existence there. Therefore simply use an INSERT based on a SELECT limited by a suitable ROWNUM value. Once you commit, the next time round the loop it won't try getting already-archived rows, as you just committed them.
I think this should be as quick if not quicker than bulkifying the inserts, with the advantage that it's simpler - Occam's razor and all that.
DECLARE
    l_commit_count NUMBER := 10000;
    l_rows_copied  NUMBER := 0;
BEGIN
    DBMS_OUTPUT.PUT_LINE('Started at ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
    LOOP
        INSERT /*+ APPEND */
          INTO arc_act_plus_triggers1
        SELECT /*+ PARALLEL */ apt.*
          FROM act_plus_triggers1 apt
         WHERE NOT EXISTS
               -- anti-join against the insert target itself, so each
               -- committed batch shrinks the next SELECT until it is empty
               (SELECT 1
                  FROM arc_act_plus_triggers1 aptc
                 WHERE aptc.surr_id = apt.surr_id)
           AND apt.status IN ('EXT', 'EXP')
           AND rownum <= l_commit_count;
        -- read SQL%ROWCOUNT before the COMMIT overwrites it
        EXIT WHEN SQL%ROWCOUNT = 0;
        l_rows_copied := l_rows_copied + SQL%ROWCOUNT;
        COMMIT;
    END LOOP;
    DBMS_OUTPUT.PUT_LINE('Finished at ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
    DBMS_OUTPUT.PUT_LINE(TO_CHAR(l_rows_copied) || ' rows copied to the archive table');
END;
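For completeness, on the literal question asked: after a FORALL, SQL%ROWCOUNT holds the total number of rows processed across all iterations, and SQL%BULK_ROWCOUNT(i) holds the count for iteration i (it is a collection, so it can't be concatenated into a string directly). A hedged fragment for the original block:
FORALL i IN 1..v_arc_act_plus_triggers1.COUNT
    INSERT /*+ APPEND_VALUES */ INTO arc_act_plus_triggers1
    VALUES v_arc_act_plus_triggers1(i);
-- each iteration inserts one row here, so the grand total is enough:
DBMS_OUTPUT.PUT_LINE('rows inserted this batch: ' || SQL%ROWCOUNT);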

Best practices to purge millions of rows in Oracle

I need to purge about 1 billion records, those more than 10 years old, from a table tblmail; for that I have created the procedure below.
I am doing it in batches.
CREATE OR REPLACE PROCEDURE PURGE_Data AS
    batch_size INTEGER := 1000;
    pvc_procedure_name     CONSTANT VARCHAR2(50) := 'Purge_data';
    pvc_info_message_num   CONSTANT NUMBER := 1;
    pvc_error_message_type CONSTANT VARCHAR2(5) := 'ERROR';
    v_message schema_mc.db_msg_log.message%TYPE;
    v_msg_num schema_mc.db_msg_log.msg_num%TYPE;
    /*
    Purpose: Provide stored procedures to be used to purge unwanted archives.
    */
BEGIN
    Delete from tblmail
     where createdate_dts < (SYSDATE - INTERVAL '10' YEAR)
       and ROWNUM <= batch_size;
    COMMIT;
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        v_msg_num := SQLCODE;
        v_message := 'Error deleting from tblmail table';
        INSERT INTO error_log
            (date, num, type, source, mail)
        VALUES
            (systimestamp, v_msg_num, pvc_error_message_type, pvc_procedure_name, v_message);
        COMMIT;
END;
Do I need to use bulk collect and delete? What is the best way to do this?
As always in computing it depends. Provided that you have an index on createdate_dts your procedure should work, but how do you know when to stop calling it? I tend to use a loop:
-- v_rows is assumed to be declared (e.g. PLS_INTEGER) alongside batch_size
loop
    delete /*+ first_rows */ from tblmail
     where createdate_dts < (SYSDATE - INTERVAL '10' YEAR)
       and ROWNUM <= batch_size;
    v_rows := SQL%ROWCOUNT;
    commit;
    exit when v_rows = 0;
end loop;
You could also return the number of deleted records if you want to keep the loop outside the procedure. Without an index on createdate_dts it may be cheaper to collect the rowids or primary keys of the rows to delete in one pass first, and then loop over them, deleting a batch of records per commit with BULK COLLECT and FORALL. However, when possible it is always nice to use a simple solution! You may want to experiment a bit in order to find the best batch size.
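A minimal sketch of that single-pass variant, collecting rowids and deleting in FORALL batches (with the caveat already mentioned: fetching across commits can still raise ORA-01555 if undo is tight):
DECLARE
    TYPE t_rids IS TABLE OF ROWID;
    l_rids t_rids;
    CURSOR c_old IS
        SELECT ROWID
          FROM tblmail
         WHERE createdate_dts < (SYSDATE - INTERVAL '10' YEAR);
BEGIN
    OPEN c_old;
    LOOP
        FETCH c_old BULK COLLECT INTO l_rids LIMIT 1000;
        EXIT WHEN l_rids.COUNT = 0;
        FORALL i IN 1 .. l_rids.COUNT
            DELETE FROM tblmail WHERE ROWID = l_rids(i);
        COMMIT; -- one commit per batch, as in the procedure above
    END LOOP;
    CLOSE c_old;
END;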

Oracle INSERT performance for large chunks of data

I am developing a stored procedure for Oracle 10g and I am hitting a pretty heavy performance bottleneck while trying to pass a list of about 2-3k items into the procedure. Here's my code:
TYPE ty_id_list
    AS TABLE OF NUMBER(11);
----------------------------------------------------------
PROCEDURE sp_performance_test (
    p_idsCollection IN schema.ty_id_list )
AS
    TYPE type_numeric_table IS TABLE OF NUMBER(11) INDEX BY BINARY_INTEGER;
    l_ids type_numeric_table;
    data  type_numeric_table;
    empty type_numeric_table;
BEGIN
    EXECUTE IMMEDIATE
        'ALTER TABLE schema.T_TEST_TABLE NOLOGGING';
    COMMIT;

    SELECT COLUMN_VALUE BULK COLLECT INTO l_ids
      FROM TABLE(p_idsCollection);

    FOR j IN 1 .. l_ids.COUNT LOOP
        data(data.count + 1) := l_ids(j);
        IF (MOD(data.COUNT, 500) = 0) THEN
            FORALL i IN 1 .. data.COUNT
                INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
                VALUES (data(i), 'Y');
            data := empty;
        END IF;
    END LOOP;

    IF (data.count > 0) THEN -- COUNT is never NULL; guard the leftover partial batch
        FORALL i IN 1 .. data.COUNT
            INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
            VALUES (data(i), 'Y');
    END IF;
    COMMIT;

    EXECUTE IMMEDIATE
        'ALTER TABLE schema.T_TEST_TABLE LOGGING';
    COMMIT;
END sp_performance_test;
The thing that adds up quite drastically seems to be this part: data(data.count+1) := l_ids(j); If I skip getting the element from the collection and change this line to data(data.count+1) := j;, procedure execution is 3-4 times faster (from over 30s to 8-9s for 3k items) - but that logic obviously is not the one I want.
Can you guys give me a hint as to where I could improve my code to get better performance on inserting data, if any improvements can be made?
Thanks,
Przemek
I don't follow your logic.
You accept a collection. You copy it to another collection:
SELECT COLUMN_VALUE BULK COLLECT INTO l_ids
FROM TABLE(p_idsCollection);
And then you copy it once again, in a loop:
FOR j IN 1 .. l_ids.COUNT LOOP
data(data.count+1) := l_ids(j);
And only after that you manage to perform your 500-row-chunk bulk insert. What is wrong with bulk inserting p_idsCollection immediately?
p.s. You don't need commits after ALTER TABLE; DDL statements commit implicitly.
That whole block after the DDL can be rewritten as:
insert into schema.T_TEST_TABLE (REF_ID, ACTIVE)
select COLUMN_VALUE, 'Y' FROM TABLE(p_idsCollection);
You can also add a hint to the insert operation:
Insert /*+ append */ into tab (...) select ...
This switches Oracle to a direct-path load, which can be faster. Note that the APPEND hint only applies to INSERT ... SELECT; for INSERT ... VALUES there is APPEND_VALUES (11gR2 and later).
http://www.dba-oracle.com/t_insert_tuning.htm
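Combining the two suggestions above - a hedged sketch, assuming the usual direct-path caveats (the table is locked for the rest of the transaction, data is loaded above the high-water mark) are acceptable here:
-- direct-path insert of the whole collection in one statement
INSERT /*+ APPEND */ INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
SELECT COLUMN_VALUE, 'Y'
  FROM TABLE(p_idsCollection);
COMMIT; -- direct-path-inserted data can't be queried until you commit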
