Oracle PL/SQL: how do you output how many inserts have been made in a FORALL statement?

What's the best way of getting and outputting how many rows have been inserted by the FORALL statement I have below? I've seen SQL%BULK_ROWCOUNT but I'm not sure how that would work in the statement below.
Is it
DBMS_OUTPUT.PUT_LINE('rows inserted '||SQL%BULK_ROWCOUNT||'');
Does the above need to go in another FORALL statement? For the code below, how would I achieve this?
DECLARE
   TYPE t_arc_act_plus_trigger1 IS TABLE OF arc_act_plus_triggers1%ROWTYPE;
   v_arc_act_plus_triggers1 t_arc_act_plus_trigger1;
   l_STOP_PROGRAM_YN VARCHAR2(1);   -- Y/N stop flag populated by Com0932.get_parameter below

   CURSOR c_arc_act_plus_triggers1 IS
      SELECT /*+ PARALLEL */ apt.*
      FROM act_plus_triggers1 apt
      WHERE NOT EXISTS
            (SELECT 1
             FROM act_plus_triggers_copy1 aptc
             WHERE aptc.surr_id = apt.surr_id)
      AND apt.status IN ('EXT', 'EXP');
BEGIN
   OPEN c_arc_act_plus_triggers1;
   LOOP
      FETCH c_arc_act_plus_triggers1 BULK COLLECT INTO v_arc_act_plus_triggers1 LIMIT 10000; -- limit to 10k to avoid running out of memory

      FORALL i IN 1..v_arc_act_plus_triggers1.COUNT
         INSERT /*+ APPEND_VALUES */ INTO arc_act_plus_triggers1 VALUES v_arc_act_plus_triggers1(i);

      Com0932.get_parameter ('ACT_ARCHIVE_TRIGGER_STOP_YN', l_STOP_PROGRAM_YN);

      IF l_STOP_PROGRAM_YN = 'Y' THEN
         p_location('insert_into_arc_act_plus - STOP_PROGRAM_YN flag = '||l_STOP_PROGRAM_YN||' so ROLLBACK');
         ROLLBACK;
         EXIT;
      END IF;

      -- **************************************************
      -- Output how many records have been inserted here???
      -- **************************************************

      -- commit after every 10000 records into arc_act_plus_triggers1
      COMMIT;
      EXIT WHEN c_arc_act_plus_triggers1%NOTFOUND;
   END LOOP;
   CLOSE c_arc_act_plus_triggers1;
END;

I haven't checked this as I have nothing to test against, so please forgive any 'missing semi-colon' type errors, and I'm afraid I'm not in a position to performance-check it.
Your code seems to select which rows to insert into the archive table based on their non-existence in the archive. Therefore simply use an INSERT based on a SELECT limited by a suitable ROWNUM value. Once you commit, the next time round the loop it won't try to get already-archived rows, as you have just committed them.
I think this should be as quick, if not quicker, than bulkifying the inserts, with the advantage that it's simpler; Occam's Razor and all that.
DECLARE
   l_commit_count NUMBER := 10000;
   l_rows_copied  NUMBER := 0;
   l_batch_rows   NUMBER;
BEGIN
   DBMS_OUTPUT.PUT_LINE('Started at '||TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
   LOOP
      INSERT /*+ APPEND */
      INTO arc_act_plus_triggers1
      SELECT /*+ PARALLEL */ apt.*
      FROM act_plus_triggers1 apt
      WHERE NOT EXISTS
            (SELECT 1
             FROM act_plus_triggers_copy1 aptc
             WHERE aptc.surr_id = apt.surr_id)
      AND apt.status IN ('EXT', 'EXP')
      AND ROWNUM < l_commit_count;
      l_batch_rows := SQL%ROWCOUNT;   -- capture before the COMMIT resets it
      COMMIT;
      l_rows_copied := l_rows_copied + l_batch_rows;
      EXIT WHEN l_batch_rows = 0;
   END LOOP;
   DBMS_OUTPUT.PUT_LINE('Finished at '||TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
   DBMS_OUTPUT.PUT_LINE(TO_CHAR(l_rows_copied)||' rows copied to the archive table');
END;
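If you do want to stick with the BULK COLLECT / FORALL version: SQL%BULK_ROWCOUNT is an index-by collection with one entry per FORALL iteration, so it can't be concatenated straight into a string as in the question. The simpler attribute for a total is SQL%ROWCOUNT, which immediately after a FORALL holds the number of rows affected by all iterations combined. A minimal sketch, assuming two extra NUMBER variables (l_batch_rows, and l_total_rows initialised to 0) are added to the DECLARE section, placed straight after the FORALL insert:
l_batch_rows := SQL%ROWCOUNT;                   -- rows inserted by this FORALL batch
l_total_rows := l_total_rows + l_batch_rows;    -- running total across batches
DBMS_OUTPUT.PUT_LINE('rows inserted this batch: '||l_batch_rows||', total so far: '||l_total_rows);
-- per-iteration detail, if ever needed (normally 1 per row for an INSERT ... VALUES):
-- FOR i IN 1..v_arc_act_plus_triggers1.COUNT LOOP
--    DBMS_OUTPUT.PUT_LINE('iteration '||i||': '||SQL%BULK_ROWCOUNT(i)||' row(s)');
-- END LOOP;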

Related

SQL%ROWCOUNT is not providing the desired outcome with dynamic SQL having BULK COLLECT and FORALL in Oracle PL/SQL

SQL%ROWCOUNT is returning the count considered (10) for the run, not the exact number of records updated. The expectation is that SQL%ROWCOUNT should provide the actual number of records updated. Please suggest how to achieve this.
Code which triggers the dynamic SQL:
FORALL indx IN 1 .. l_account_data.COUNT SAVE EXCEPTIONS   -- assume 10 as count
   EXECUTE IMMEDIATE dynamic_sql_query USING l_account_data (indx);

DBMS_OUTPUT.put_line ('Successful UPDATE of '|| TO_CHAR (SQL%ROWCOUNT) || ' record');
COMMIT;
dynamic_sql_query
BEGIN
   SELECT clmn_x, clmn_y
   BULK COLLECT INTO l_subscr_data
   FROM table_x e, table_y c
   WHERE c.ref_id = :account_no AND e.account_no = c.account_no;

   FORALL indx IN 1 .. l_subscr_data.COUNT
      UPDATE table_z ciem   -- this update will update multiple records for each account
      SET ciem.ext_id = ciem.sub_no || ROWID
      WHERE ciem.sub_no = l_subscr_data (indx).clmn_x
      AND ciem.subscr_no_resets = l_subscr_data (indx).clmn_y
      AND ciem.status IN (1,2);
END;
Your outer execute immediate call isn't aware of what is happening inside the dynamic SQL; it doesn't know what it's doing, or how many rows it may or may not have affected.
To get an accurate count you would need to modify your dynamic statement to add something like:
FOR indx IN 1 .. l_subscr_data.COUNT LOOP
:total_count := :total_count + coalesce(SQL%BULK_ROWCOUNT(indx), 0);
END LOOP;
and change your outer call to (a) pass an extra IN OUT bind variable to track the total count, and (b) use a FOR loop rather than FORALL, because FORALL only seems to retain the bind value after the first dynamic call (not sure if that's documented, or a bug). So something like:
...
l_total_count number := 0;
...
FOR indx IN 1 .. l_account_data.COUNT LOOP
   EXECUTE IMMEDIATE dynamic_sql_query
      USING l_account_data (indx), IN OUT l_total_count;
END LOOP;

DBMS_OUTPUT.put_line ('Successful UPDATE of '|| TO_CHAR (l_total_count) || ' record');
db<>fiddle demo with made-up data.
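As a minimal, self-contained illustration of the IN OUT bind mechanism the above relies on (hypothetical names, nothing to do with the asker's tables): inside a dynamic PL/SQL block, placeholders with the same name share one bind argument, so the inner block can read and update the running total on every call.
DECLARE
   l_total NUMBER := 0;
BEGIN
   FOR i IN 1 .. 3 LOOP
      -- the inner PL/SQL block reads and updates the same bind on each call
      EXECUTE IMMEDIATE 'BEGIN :n := :n + 10; END;'
         USING IN OUT l_total;
   END LOOP;
   DBMS_OUTPUT.PUT_LINE('Total: '||l_total);   -- prints 30
END;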

Strange behaviour of BULK COLLECT

I tinkered together the following PL/SQL BULK COLLECT code, which works astonishingly fast for updates on huge tables (>50,000,000 rows). The only problem is that it does not perform the updates for the remaining < 5000 rows per table; 5000 is the given LIMIT for the FETCH instruction:
DECLARE
   -- source table cursor (only columns to be updated)
   CURSOR base_table_cur IS
      select a.rowid, TARGET_COLUMN
      from TARGET_TABLE a
      where TARGET_COLUMN is null;

   TYPE base_type IS TABLE OF base_table_cur%rowtype INDEX BY PLS_INTEGER;
   base_tab base_type;

   -- new data
   CURSOR new_data_cur IS
      select a.rowid,
             coalesce(b.SOURCE_COLUMN, 'FILL_VALUE'||a.JOIN_COLUMN) TARGET_COLUMN
      from TARGET_TABLE a
      left outer join SOURCE_TABLE b
        on a.JOIN_COLUMN = b.JOIN_COLUMN
      where a.TARGET_COLUMN is null;

   TYPE new_data_type IS TABLE OF new_data_cur%rowtype INDEX BY PLS_INTEGER;
   new_data_tab new_data_type;

   TYPE row_id_type IS TABLE OF ROWID INDEX BY PLS_INTEGER;
   row_id_tab row_id_type;

   TYPE rt_update_cols IS RECORD (
      TARGET_COLUMN TARGET_TABLE.TARGET_COLUMN%TYPE
   );
   TYPE update_cols_type IS TABLE OF rt_update_cols INDEX BY PLS_INTEGER;
   update_cols_tab update_cols_type;

   dml_errors EXCEPTION;
   PRAGMA exception_init ( dml_errors, -24381 );
BEGIN
   OPEN base_table_cur;
   OPEN new_data_cur;
   LOOP
      FETCH base_table_cur BULK COLLECT INTO base_tab LIMIT 5000;
      IF base_table_cur%notfound THEN
         DBMS_OUTPUT.PUT_LINE('Nothing to update. Exiting.');
         EXIT;
      END IF;
      FETCH new_data_cur BULK COLLECT INTO new_data_tab LIMIT 5000;

      FOR i IN base_tab.first..base_tab.last LOOP
         row_id_tab(i) := new_data_tab(i).rowid;
         update_cols_tab(i).TARGET_COLUMN := new_data_tab(i).TARGET_COLUMN;
      END LOOP;

      FORALL i IN base_tab.first..base_tab.last SAVE EXCEPTIONS
         UPDATE (SELECT TARGET_COLUMN FROM TARGET_TABLE)
         SET row = update_cols_tab(i)
         WHERE ROWID = row_id_tab(i);

      COMMIT;
      EXIT WHEN base_tab.count < 5000; -- changing to 1 didn't help!
   END LOOP;
   COMMIT;
   CLOSE base_table_cur;
   CLOSE new_data_cur;
EXCEPTION
   WHEN dml_errors THEN
      FOR i IN 1..SQL%bulk_exceptions.count LOOP
         dbms_output.put_line('Some error occurred');
      END LOOP;
END;
Where is my mistake? It looks correct to me though.
The problem is this line:
IF base_table_cur%notfound THEN
The cursor's %NOTFOUND attribute is set as soon as a fetch returns fewer rows than the LIMIT value, so if the last fetch does not return exactly 5000 rows, the loop exits before those records are processed.
It's a common gotcha for people using BULK COLLECT ... LIMIT for the first time. The solution is to change the exit condition to
EXIT when base_tab.count() = 0;
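In other words, the general shape of a BULK COLLECT ... LIMIT loop is roughly the following (a generic sketch with hypothetical table names, not the poster's schema):
DECLARE
   CURSOR c IS SELECT * FROM some_table;                   -- hypothetical source table
   TYPE t_rows IS TABLE OF some_table%ROWTYPE;
   l_rows t_rows;
BEGIN
   OPEN c;
   LOOP
      FETCH c BULK COLLECT INTO l_rows LIMIT 5000;
      EXIT WHEN l_rows.COUNT = 0;                          -- test the collection, not c%NOTFOUND
      FORALL i IN 1 .. l_rows.COUNT
         INSERT INTO some_table_archive VALUES l_rows(i);  -- hypothetical target table
   END LOOP;
   CLOSE c;
END;
The partial final batch is processed because the exit test looks at what the fetch actually returned rather than at %NOTFOUND.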
"I need to ensure, that the base_table_cur is not empty and exit if it is. I'l get an error if it is empty"
The new_data_cur cursor includes the table which is selected in base_table_cur cursor. So I don't think you need the two loops. You need a simple test to see whether the first cursor returns something, then just loop round the second cursor.
I'm not entirely clear on your logic, so I have changed as little as possible to demonstrate the sort of structure I think you need. However, the UPDATE statement looks a little odd, so you may still run into issues.
OPEN base_table_cur;
FETCH base_table_cur BULK COLLECT INTO base_tab LIMIT 1;

if base_tab.count = 0 then
   DBMS_OUTPUT.PUT_LINE('Nothing to update. Exiting.');
else
   OPEN new_data_cur;
   LOOP
      FETCH new_data_cur BULK COLLECT INTO new_data_tab LIMIT 5000;
      exit when new_data_tab.count() = 0;

      FOR i IN new_data_tab.first..new_data_tab.last LOOP
         row_id_tab(i) := new_data_tab(i).rowid;
         update_cols_tab(i).TARGET_COLUMN := new_data_tab(i).TARGET_COLUMN;
      END LOOP;

      FORALL i IN new_data_tab.first..new_data_tab.last SAVE EXCEPTIONS
         UPDATE (SELECT TARGET_COLUMN FROM TARGET_TABLE)
         SET row = update_cols_tab(i)
         WHERE ROWID = row_id_tab(i);
   END LOOP;
   CLOSE new_data_cur;
end if;
COMMIT;
CLOSE base_table_cur;

Bulk inserting in Oracle PL/SQL

I have around 5 million records which need to be copied from a table in one schema to a table in another schema (in the same database). I have prepared a script but it gives me the below error.
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
Following is my script
DECLARE
   TYPE tA IS TABLE OF varchar2(10) INDEX BY PLS_INTEGER;
   TYPE tB IS TABLE OF SchemaA.TableA.band%TYPE INDEX BY PLS_INTEGER;
   TYPE tD IS TABLE OF SchemaA.TableA.start_date%TYPE INDEX BY PLS_INTEGER;
   TYPE tE IS TABLE OF SchemaA.TableA.end_date%TYPE INDEX BY PLS_INTEGER;
   rA tA;
   rB tB;
   rD tD;
   rE tE;
   f number := 0;
BEGIN
   SELECT col1||col2||col3 as main_col, band, effective_start_date as start_date, effective_end_date as end_date
   BULK COLLECT INTO rA, rB, rD, rE
   FROM schemab.tableb;

   FORALL i IN rA.FIRST..rE.LAST
      insert into SchemaA.TableA(main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
      values(rA(i), rB(i), 'C', rD(i), rE(i), 71);

   f := f + 1;
   if (f = 10000) then
      commit;
   end if;
end;
Could you please help me in finding where the error lies?
Why not a simple
insert into SchemaA.TableA (main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
SELECT col1||col2||col3 as main_col, band, 'C', effective_start_date, effective_end_date, 71
FROM schemab.tableb;
This
f:=f+1;
if (f=10000) then
commit;
end if;
does not make any sense. f becomes 1 - that's it. f=10000 will never be true, thus you don't make a COMMIT.
The following script worked for me and I was able to load around 5 million rows within 15 minutes.
ALTER SESSION ENABLE PARALLEL DML
/
DECLARE
   cursor c_p1 is
      SELECT col1||col2||col3 as main_col, band, effective_start_date as start_date, effective_end_date as end_date
      FROM schemab.tableb;
   TYPE TY_P1_FULL is table of c_p1%rowtype index by pls_integer;
   v_P1_FULL TY_P1_FULL;
   v_seq_num number;
BEGIN
   open c_p1;
   loop
      fetch c_p1 BULK COLLECT INTO v_P1_FULL LIMIT 10000;
      exit when v_P1_FULL.count = 0;
      FOR i IN 1..v_P1_FULL.COUNT loop
         INSERT /*+ APPEND */ INTO schemaA.tableA VALUES (v_P1_FULL(i));
      end loop;
      commit;
   end loop;
   close c_p1;
   dbms_output.put_line('Load completed');
end;
/
-- Disable parallel mode for this session
ALTER SESSION DISABLE PARALLEL DML
/
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
You get that error because you have a literal in the VALUES clause of the INSERT. The FORALL expects everything to be bound to an array.
Your program has a massive problem, literally. You have no LIMIT on the BULK COLLECT clause, so it is going to try to load all five million records from TableB into your collections. That will blow your session's memory limit.
The point of using BULK COLLECT and FORALL is to bite off chunks of a bigger data set and process it in batches. For that you need a loop. The loop has no FOR condition: instead, test whether the fetch returned anything and exit when the array has zero records.
DECLARE
   TYPE recA IS RECORD (
        main_col   SchemaA.TableA.main_col%TYPE
      , band       SchemaA.TableA.band%TYPE
      , user_type  SchemaA.TableA.user_type%TYPE
      , start_date date
      , end_date   date
      , roll_no    number);
   TYPE recsA is table of recA;
   nt_a recsA;
   f number := 0;
   CURSOR cur_b is
      SELECT col1||col2||col3 as main_col,
             band,
             'C' as user_type,
             effective_start_date as start_date,
             effective_end_date as end_date,
             71 as roll_no
      FROM schemab.tableb;
BEGIN
   open cur_b;
   loop
      fetch cur_b bulk collect into nt_a limit 1000;
      exit when nt_a.count() = 0;

      FORALL i IN 1 .. nt_a.COUNT
         insert into SchemaA.TableA(main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
         values nt_a(i);

      f := f + sql%rowcount;
      if (f >= 10000) then
         commit;
         f := 0;
      end if;
   end loop;
   commit;
   close cur_b;
end;
Please note that issuing commits inside a loop is contraindicated. You lay yourself open to runtime errors such as ORA-01002 and ORA-01555, and if your program does crash half-way through you will have great difficulty resuming it cleanly. By all means persist if you have problems with the UNDO tablespace, but the correct answer is to get the DBA to enlarge the UNDO tablespace, not to weaken your code.
"I am using bulk insert because it gives better performance"
It is true that BULK COLLECT and FORALL ... INSERT is more performant than a CURSOR FOR loop with row-by-row single inserts. It is not more efficient than a pure SQL INSERT INTO ... SELECT. The value of the construct is that it allows us to manipulate the contents of the array before inserting it. This is handy if we have complex business rules which can only be applied programmatically.
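For example, a contrived sketch of the kind of per-row adjustment that justifies the construct, slotted between the FETCH and the FORALL in the block above (the rule itself is invented for illustration):
FOR i IN 1 .. nt_a.COUNT LOOP
   -- invented business rule, applied programmatically before the bulk insert
   IF nt_a(i).end_date IS NULL THEN
      nt_a(i).end_date := SYSDATE;
   END IF;
END LOOP;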
Please try after changing the first 2 lines of your code to the below:
DECLARE
TYPE tA IS TABLE OF SchemaA.TableA.main_col%TYPE INDEX BY PLS_INTEGER;
...
...
This may be because of a data type/length mismatch: in the declaration section you missed declaring that collection to inherit its type from the table column.
Also, as mentioned, the f logic for the commit will not do the magic for you. Better to use LIMIT with BULK COLLECT.

ORACLE PL/SQL: The lost last chunk

Please help me to solve this ORACLE PL/SQL problem.
I have the following code:
BEGIN
   DECLARE
      P_COMMIT_STEP NUMBER := 10000; -- Commit every 10000 records copied
      V_QUERY VARCHAR2 (4000) := NULL;
      MY_CURSOR SYS_REFCURSOR;
      TYPE FETCH_ARRAY IS TABLE OF MY_TABLE_BACKUP%ROWTYPE;
      S_ARRAY FETCH_ARRAY;
   BEGIN
      V_QUERY := 'SELECT * FROM MY_TABLE_BACKUP';
      OPEN MY_CURSOR FOR V_QUERY;
      LOOP
         FETCH MY_CURSOR
            BULK COLLECT INTO S_ARRAY
            LIMIT P_COMMIT_STEP;

         FORALL I IN 1 .. S_ARRAY.COUNT
            INSERT INTO MY_TABLE_BIS /*+ APPEND */
            VALUES S_ARRAY (I);

         COMMIT;
         EXIT WHEN MY_CURSOR%NOTFOUND;
      END LOOP;
      CLOSE MY_CURSOR;
      COMMIT;
   END;
END;
Since the commit step is 10000, the copy only works when the row count is a multiple of 10000.
So, if the original table has 1000010 records, only 1000000 records will be copied.
Where is the error?
The code seems correct, in my opinion.
Thank you very much for considering my request.
As noted in this article, you shouldn't rely on %NOTFOUND with bulk collect and forall. Check how many rows were fetched:
LOOP
   FETCH MY_CURSOR
      BULK COLLECT INTO S_ARRAY
      LIMIT P_COMMIT_STEP;

   FORALL I IN 1 .. S_ARRAY.COUNT
      INSERT INTO MY_TABLE_BIS /*+ APPEND */
      VALUES S_ARRAY (I);

   COMMIT;
   EXIT WHEN S_ARRAY.COUNT < P_COMMIT_STEP;
END LOOP;
Your first hundred iterations will get 10000 rows each, so the count will equal your limit for all of those and the loop will continue. The next one will get only 10 rows, so it will exit after the FORALL.
It should still work with the %NOTFOUND check at the end of the loop - the article is really talking about it being an issue if you use it to exit before processing the partial batch, and the documentation shows that pattern - but in some circumstances it seems not to work. Having said that, I can't reproduce the issue from your code in 11.2.0.3.
Incidentally, committing inside a loop is generally a bad idea unless you've made the block restartable.
Hi, a small modification to your FORALL loop:
FORALL I IN S_ARRAY.first .. S_ARRAY.last
   INSERT INTO MY_TABLE_BIS /*+ APPEND */
   VALUES S_ARRAY (I);
I think it will work for you.

Oracle INSERT performance for large chunks of data

I am developing a stored procedure for Oracle 10g and I am hitting a pretty heavy performance bottleneck while trying to pass a list of about 2-3k items into the procedure. Here's my code:
TYPE ty_id_list AS TABLE OF NUMBER(11);
----------------------------------------------------------
PROCEDURE sp_performance_test (
   p_idsCollection IN schema.ty_id_list )
AS
   TYPE type_numeric_table IS TABLE OF NUMBER(11) INDEX BY BINARY_INTEGER;
   l_ids type_numeric_table;
   data  type_numeric_table;
   empty type_numeric_table;
BEGIN
   EXECUTE IMMEDIATE
      'ALTER TABLE schema.T_TEST_TABLE NOLOGGING';
   COMMIT;

   SELECT COLUMN_VALUE BULK COLLECT INTO l_ids
   FROM TABLE(p_idsCollection);

   FOR j IN 1 .. l_ids.COUNT LOOP
      data(data.count+1) := l_ids(j);
      IF (MOD(data.COUNT, 500) = 0) THEN
         FORALL i IN 1 .. data.COUNT
            INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
            VALUES (data(i), 'Y');
         data := empty;
      END IF;
   END LOOP;

   IF (data.count IS NOT NULL) THEN
      FORALL i IN 1 .. data.COUNT
         INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
         VALUES (data(i), 'Y');
   END IF;
   COMMIT;

   EXECUTE IMMEDIATE
      'ALTER TABLE schema.T_TEST_TABLE LOGGING';
   COMMIT;
END sp_performance_test;
So the thing that slows the process down quite drastically seems to be this part: data(data.count+1) := l_ids(j);. If I skip getting the element from the collection and change this line to data(data.count+1) := j;, the procedure executes 3-4 times faster (from over 30s down to 8-9s for 3k items) - but that obviously is not the logic I want.
Can you give me a hint as to where I could improve my code to get better performance when inserting the data, if any improvements can be made?
Thanks,
Przemek
I don't follow your logic.
You accept a collection. You copy it to another collection:
SELECT COLUMN_VALUE BULK COLLECT INTO l_ids
FROM TABLE(p_idsCollection);
And then you copy it once again, in a loop:
FOR j IN 1 .. l_ids.COUNT LOOP
data(data.count+1) := l_ids(j);
And only after that you manage to perform your 500-row-chunk bulk insert. What is wrong with bulk inserting p_idsCollection immediately?
P.S. You don't need commits after ALTER TABLE; DDL statements issue an implicit commit.
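For what it's worth, "bulk inserting p_idsCollection immediately" could look something like the following (a sketch only; it assumes the collection is densely filled, as it is when built with a constructor or BULK COLLECT):
-- Drive the FORALL straight off the input collection, with no intermediate copies.
FORALL i IN 1 .. p_idsCollection.COUNT
   INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
   VALUES (p_idsCollection(i), 'Y');
-- If the collection could ever be sparse, FORALL i IN INDICES OF p_idsCollection
-- is the safer form (available from Oracle 10g onwards).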
That whole block after the DDL can be rewritten as:
insert into schema.T_TEST_TABLE (REF_ID, ACTIVE)
select COLUMN_VALUE, 'Y' FROM TABLE(p_idsCollection);
You can also add a hint to the insert operation:
INSERT /*+ APPEND */ INTO tab (...) VALUES (...)
It changes the way Oracle performs the insert and it can work faster.
http://www.dba-oracle.com/t_insert_tuning.htm
