Executing a query within FORALL - Oracle

I have a set of queries stored in a collection called Queries.
Now I want to execute them using FORALL, in the way shown below:
open EX01_CURSOR;
LOOP
FETCH EX01_CURSOR BULK COLLECT INTO Queries LIMIT 50000;
EXIT WHEN Queries.COUNT = 0;
FORALL i IN 1 .. Queries.count
execute immediate Queries(i).update_query;
commit;
END LOOP;
close EX01_CURSOR;
I want to execute my query within the FORALL block.
I'm getting a compilation error, as shown below:
DML statement without BULK In-BIND cannot be used inside FORALL
Can somebody please help?

FORALL isn't really a LOOP or iterator; it is a BULK DML operation. It doesn't run an operation for each entry in the collection, entry by entry, but runs one DML statement bulk-bound to all the entries in the collection. Here each element is a different query string, so there is no single statement with a bulk in-bind (hence the error), and FORALL can't iterate over them. To iterate and run the EXECUTE IMMEDIATE per element, you will need to use a LOOP (or take another approach).
Here's an example. It still uses BULK COLLECT to control the commit size, but does the dynamic SQL in a FOR LOOP. (Note: it is not efficient to do this volume of dynamic SQL; this is just an example that keeps your original collection size.)
Set up test data:
CREATE TABLE TEST_TABLE (
TEST_ID NUMBER
);
INSERT INTO TEST_TABLE SELECT LEVEL
FROM DUAL
CONNECT BY LEVEL <= 50001;
Initial Data:
SELECT * FROM TEST_TABLE ORDER BY 1 DESC FETCH FIRST 5 ROWS ONLY;
TEST_ID
50001
50000
49999
49998
49997
Then run the block. This does not use FORALL BULK DML. It uses a FOR LOOP instead:
DECLARE
CURSOR EX01_CURSOR IS (
SELECT 'UPDATE TEST_TABLE SET TEST_TABLE.TEST_ID = TEST_TABLE.TEST_ID + 100000 WHERE TEST_TABLE.TEST_ID = ' ||
TEST_TABLE.TEST_ID AS UPDATE_QUERY
FROM TEST_TABLE);
TYPE EX01_CUR_TYPE IS TABLE OF EX01_CURSOR%ROWTYPE;
QUERIES EX01_CUR_TYPE;
BEGIN
QUERIES := EX01_CUR_TYPE();
OPEN EX01_CURSOR;
LOOP
FETCH EX01_CURSOR BULK COLLECT INTO QUERIES LIMIT 50000;
EXIT WHEN QUERIES.COUNT = 0;
FOR UPDATE_POINTER IN 1..QUERIES.COUNT
LOOP
EXECUTE IMMEDIATE QUERIES(UPDATE_POINTER).UPDATE_QUERY;
END LOOP;
COMMIT;
END LOOP;
CLOSE EX01_CURSOR;
END;
/
PL/SQL procedure successfully completed.
Result:
SELECT * FROM TEST_TABLE ORDER BY 1 DESC FETCH FIRST 5 ROWS ONLY;
TEST_ID
150001
150000
149999
149998
149997
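As an aside: if the generated statements all share one shape, as they do in this example, you can skip dynamic SQL entirely and let FORALL do what it is built for, one static UPDATE bulk-bound to a collection of key values. A sketch against the same TEST_TABLE (the cursor and type names here are illustrative):
DECLARE
  CURSOR ex01_cursor IS
    SELECT test_id FROM test_table;
  TYPE ex01_id_type IS TABLE OF test_table.test_id%TYPE;
  ids ex01_id_type;
BEGIN
  OPEN ex01_cursor;
  LOOP
    FETCH ex01_cursor BULK COLLECT INTO ids LIMIT 50000;
    EXIT WHEN ids.COUNT = 0;
    -- one bulk-bound UPDATE per batch, no dynamic SQL needed
    FORALL i IN 1 .. ids.COUNT
      UPDATE test_table
      SET test_id = test_id + 100000
      WHERE test_id = ids(i);
    COMMIT;
  END LOOP;
  CLOSE ex01_cursor;
END;
/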

Related

How to write procedure to copy data from one table to another table

Hello I have this situation
I have a table with 400 million records and I want to migrate some data to another table to get better performance.
I know the process could be slow and heavy, and I want to execute something like this:
INSERT INTO TABLE_2 SELECT NAME,TAX_ID,PROD_CODE FROM TABLE_1;
But I want this in a procedure that iterates over the table in chunks of 50000 records.
I saw the following, but it DELETEs the rows, and the target is SQL Server, not Oracle:
WHILE 1 = 1
BEGIN
DELETE TOP (50000) FROM DBO.VENTAS
OUTPUT
Deleted.AccountNumber,
Deleted.BillToAddressID,
Deleted.CustomerID,
Deleted.OrderDate,
Deleted.SalesOrderNumber,
Deleted.TotalDue
INTO DATOS_HISTORY.dbo.VENTAS
WHERE OrderDate >= '20130101'
and OrderDate < '20140101'
IF @@ROWCOUNT = 0 BREAK;
END;
If it's just for a one-time migration, don't paginate/iterate at all. Just use a parallel DML (PDML) append insert-select:
BEGIN
EXECUTE IMMEDIATE 'alter session enable parallel dml';
INSERT /*+ parallel(8) nologging append */ INTO table_2
SELECT name,tax_id,prod_code from table_1;
COMMIT;
EXECUTE IMMEDIATE 'alter session disable parallel dml';
END;
Or if the table doesn't yet exist, CTAS it to be even simpler:
CREATE TABLE table_2 PARALLEL (DEGREE 8) NOLOGGING AS
SELECT name,tax_id,prod_code FROM table_1;
If you absolutely must iterate due to other business needs you didn't mention, you can use PL/SQL for that as well:
DECLARE
CURSOR cur_test
IS
SELECT name,tax_id,prod_code
FROM table_1;
TYPE test_tabtype IS TABLE OF cur_test%ROWTYPE;
tab_test test_tabtype;
var_limit integer := 50000;
BEGIN
OPEN cur_test;
FETCH cur_test BULK COLLECT INTO tab_test LIMIT var_limit;
LOOP
-- do whatever else you need to do
FORALL i IN 1 .. tab_test.COUNT -- safe even when the fetch returns no rows
INSERT INTO table_2
(name,tax_id,prod_code)
VALUES (tab_test(i).name, tab_test(i).tax_id, tab_test(i).prod_code);
COMMIT;
EXIT WHEN tab_test.COUNT < var_limit;
FETCH cur_test BULK COLLECT INTO tab_test LIMIT var_limit;
END LOOP;
CLOSE cur_test;
END;
That won't be nearly as fast, though. If you need it even faster, you can create a SQL object type and nested table type instead of the PL/SQL types you see here; then you can do a normal INSERT /*+ APPEND */ ... SELECT statement that will use direct path. But honestly, I doubt you really need iteration at all.
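For reference, that object-type variant might look like the sketch below. The type names and column datatypes are guesses for illustration; with a SQL object type and nested table type, each batch can be loaded with a direct-path INSERT ... SELECT from TABLE():
CREATE TYPE t1_row_obj AS OBJECT (
  name      VARCHAR2(100),
  tax_id    NUMBER,
  prod_code VARCHAR2(20)
);
/
CREATE TYPE t1_row_tab AS TABLE OF t1_row_obj;
/
DECLARE
  CURSOR cur_test IS
    SELECT t1_row_obj(name, tax_id, prod_code) FROM table_1;
  tab_test t1_row_tab;
  var_limit INTEGER := 50000;
BEGIN
  OPEN cur_test;
  LOOP
    FETCH cur_test BULK COLLECT INTO tab_test LIMIT var_limit;
    EXIT WHEN tab_test.COUNT = 0;
    -- direct-path insert straight from the SQL collection
    INSERT /*+ APPEND */ INTO table_2 (name, tax_id, prod_code)
      SELECT name, tax_id, prod_code FROM TABLE(tab_test);
    COMMIT;
  END LOOP;
  CLOSE cur_test;
END;
/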

SQL%ROWCOUNT is not providing the desired outcome with dynamic SQL having BULK COLLECT and FORALL in Oracle PL/SQL

SQL%ROWCOUNT is returning the number of collection elements considered in the run (10), not the actual number of rows updated. The expectation is that SQL%ROWCOUNT should provide the actual number of records updated. Please suggest how to achieve this.
Code which triggers dynamic SQL
FORALL indx IN 1 .. l_account_data.COUNT --assume 10 as count
SAVE EXCEPTIONS
EXECUTE IMMEDIATE dynamic_sql_query USING l_account_data (indx);
DBMS_OUTPUT.put_line ('Successful UPDATE of '|| TO_CHAR (SQL%ROWCOUNT) || ' record');
COMMIT;
dynamic_sql_query
BEGIN
SELECT clmn_x, clmn_y
BULK COLLECT INTO l_subscr_data
FROM table_x e, table_y c
WHERE c.ref_id = :account_no AND e.account_no = c.account_no;
FORALL indx IN 1 .. l_subscr_data.COUNT
UPDATE table_z ciem --this update will update multiple records for each account
SET ciem.ext_id = ciem.sub_no || ROWID
WHERE ciem.sub_no = l_subscr_data (indx).clmn_x
AND ciem.subscr_no_resets = l_subscr_data (indx).clmn_y
AND ciem.status IN (1,2);
END;
Your outer execute immediate call isn't aware of what is happening inside the dynamic SQL; it doesn't know what it's doing, or how many rows it may or may not have affected.
To get an accurate count you would need to modify your dynamic statement to add something like:
FOR indx IN 1 .. l_subscr_data.COUNT LOOP
:total_count := :total_count + coalesce(SQL%BULK_ROWCOUNT(indx), 0);
END LOOP;
and change your outer call to (a) pass an extra IN OUT bind variable to track the total count, and (b) use a FOR LOOP rather than FORALL, because the bind only seems to retain its value after the first dynamic call under FORALL (not sure if that's documented, or a bug). So something like:
...
l_total_count number := 0;
...
FOR indx IN 1 .. l_account_data.COUNT LOOP
EXECUTE IMMEDIATE dynamic_sql_query
USING l_account_data (indx), in out l_total_count;
END LOOP;
DBMS_OUTPUT.put_line ('Successful UPDATE of '|| TO_CHAR (l_total_count) || ' record');
db<>fiddle demo with made-up data.
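Assembled, the dynamic block would look something like the sketch below (the declaration of l_subscr_data is elided here, as in the original, and :total_count is the extra IN OUT bind):
BEGIN
  SELECT clmn_x, clmn_y
  BULK COLLECT INTO l_subscr_data
  FROM table_x e, table_y c
  WHERE c.ref_id = :account_no AND e.account_no = c.account_no;
  FORALL indx IN 1 .. l_subscr_data.COUNT
    UPDATE table_z ciem
    SET ciem.ext_id = ciem.sub_no || ROWID
    WHERE ciem.sub_no = l_subscr_data (indx).clmn_x
    AND ciem.subscr_no_resets = l_subscr_data (indx).clmn_y
    AND ciem.status IN (1,2);
  -- SQL%BULK_ROWCOUNT(indx) holds the rows updated by iteration indx
  FOR indx IN 1 .. l_subscr_data.COUNT LOOP
    :total_count := :total_count + coalesce(SQL%BULK_ROWCOUNT(indx), 0);
  END LOOP;
END;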

Getting entire data into collections using Oracle PL/SQL BULK COLLECT with LIMIT clause

I have a table called EMP with 140000 rows and I need to keep the entire data set in a collection. How do I extend the collection and load all the data into it using the BULK COLLECT ... LIMIT clause?
The logic below is not providing the required result, since each fetch overwrites the collection with new records. Please suggest the correct logic.
DECLARE
CURSOR c_get_employee IS
SELECT empno,
ename,
deptno,
sal
FROM emp;
TYPE t_employee
IS TABLE OF c_get_employee%ROWTYPE INDEX BY binary_integer;
l_employee T_EMPLOYEE;
BEGIN
OPEN c_get_employee;
LOOP
FETCH c_get_employee bulk collect INTO l_employee limit 300;
EXIT WHEN l_employee.count = 0;
END LOOP;
CLOSE c_get_employee;
FOR i IN 1..l_employee.count LOOP
dbms_output.Put_line (L_employee(i).ename
||'<-->'
||L_employee(i).sal);
END LOOP;
EXCEPTION
WHEN OTHERS THEN
dbms_output.Put_line ('Unexpected error :- '
|| SQLERRM);
END;
You are exiting the loop too early and processing the data too late: each fetch overwrites the collection, so the FOR loop must run inside the fetch loop (end the fetch loop after the FOR loop), and the cursor should be closed after that.
Also, as @APC pointed out, the exit condition should use the count of fetched results instead of NOTFOUND on the cursor. Otherwise, if the last fetch returns fewer records than the fetch size, NOTFOUND will be true and the loop terminates before those records are processed.
Try this:
DECLARE
CURSOR c_get_employee IS
SELECT empno,
ename,
deptno,
sal
FROM emp;
TYPE t_employee
IS TABLE OF c_get_employee%ROWTYPE INDEX BY binary_integer;
l_employee T_EMPLOYEE;
BEGIN
OPEN c_get_employee;
LOOP
FETCH c_get_employee bulk collect INTO l_employee limit 3;
EXIT WHEN l_employee.count = 0;
FOR i IN 1..l_employee.count LOOP
dbms_output.Put_line (L_employee(i).ename
||'<-->'
||L_employee(i).sal);
END LOOP;
END LOOP;
CLOSE c_get_employee;
EXCEPTION
WHEN OTHERS THEN
dbms_output.Put_line ('Unexpected error :- '
|| SQLERRM);
END;
The logic below is not giving the required result
Wild guess: you're only getting twelve rows. This is a familiar gotcha with the LIMIT clause. This line is the problem:
EXIT WHEN c_get_employee%NOTFOUND;
You have fourteen records in EMP. With a limit of 3, the last FETCH collects only 2 records; PL/SQL sets %NOTFOUND, and the loop exits before processing that final set, so only four full sets (twelve rows) get processed. The solution is to check the size of the collection instead:
EXIT WHEN l_employee.count() = 0;
I want to load the entire data into the collection and close the cursor. After that I want to open the collection and use the data for business logic
That's not how BULK COLLECT ... LIMIT works. The point of the LIMIT clause is to, er, limit the number of records fetched at a time. We need to do this when the queried data is too big to handle in a single fetch. PL/SQL collections are memory structures held in the session's allocation of memory: if they get too big they will blow the PGA. (Definition of "too big" will depend on how your DBA has configured the PGA.)
So, if you have a small result set, ditch the LIMIT clause and populate the collection in a single fetch. But if you have sufficient data to require the LIMIT clause you need to include the business logic loop inside the fetch loop.
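For the small-result-set case, a minimal sketch: drop the LIMIT, populate the collection in a single fetch, close the cursor, and then loop over the collection:
DECLARE
  CURSOR c_get_employee IS
    SELECT empno, ename, deptno, sal FROM emp;
  TYPE t_employee IS TABLE OF c_get_employee%ROWTYPE INDEX BY binary_integer;
  l_employee t_employee;
BEGIN
  OPEN c_get_employee;
  FETCH c_get_employee BULK COLLECT INTO l_employee; -- no LIMIT: one fetch
  CLOSE c_get_employee;
  FOR i IN 1 .. l_employee.count LOOP
    dbms_output.put_line(l_employee(i).ename || '<-->' || l_employee(i).sal);
  END LOOP;
END;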

Oracle PL/SQL how do you output how many inserts have been made in a FORALL statement

What's the best way of getting and outputting how many rows have been inserted in the FORALL statement I have below? I've seen SQL%BULK_ROWCOUNT but I'm not sure how that would work in the statement below.
Is it:
DBMS_OUTPUT.PUT_LINE('rows inserted '||SQL%BULK_ROWCOUNT||'');
Does the above need to go in another FORALL statement? For the code below, how would I achieve this?
DECLARE
TYPE t_arc_act_plus_trigger1 IS TABLE OF arc_act_plus_triggers1%ROWTYPE;
v_arc_act_plus_triggers1 t_arc_act_plus_trigger1;
CURSOR c_arc_act_plus_triggers1 IS
SELECT /*+ PARALLEL */ apt.*
FROM act_plus_triggers1 apt
WHERE NOT EXISTS
(SELECT 1
FROM act_plus_triggers_copy1 aptc
WHERE aptc.surr_id = apt.surr_id)
AND apt.status IN ('EXT', 'EXP');
BEGIN
OPEN c_arc_act_plus_triggers1;
LOOP
FETCH c_arc_act_plus_triggers1 BULK COLLECT INTO v_arc_act_plus_triggers1 LIMIT 10000; -- limit to 10k to avoid out of memory
FORALL i IN 1..v_arc_act_plus_triggers1.COUNT
INSERT /*+ APPEND_VALUES */ INTO arc_act_plus_triggers1 values v_arc_act_plus_triggers1(i);
Com0932.get_parameter ('ACT_ARCHIVE_TRIGGER_STOP_YN',l_STOP_PROGRAM_YN);
IF l_STOP_PROGRAM_YN = 'Y' THEN
p_location('insert_into_arc_act_plus - STOP_PROGRAM_YN flag = '||l_STOP_PROGRAM_YN||' so ROLLBACK');
ROLLBACK;
EXIT;
END IF;
-- **************************************************
-- Output how many records have been inserted here???
-- **************************************************
-- commit after every 10000 records into arc_act_plus_triggers1
COMMIT;
EXIT WHEN c_arc_act_plus_triggers1%NOTFOUND;
END LOOP;
CLOSE c_arc_act_plus_triggers1;
END;
I haven't checked this as I have nothing to test against, so please forgive any missing-semicolon-type errors, and I'm afraid I'm not in a position to performance-check it.
Your code selects which rows to insert into the archive table based on their non-existence in the archive. Therefore, simply use an INSERT based on a SELECT limited by a suitable ROWNUM value. Once you commit, the next time round the loop it won't pick up already-archived rows, as you just committed them.
I think this should be as quick, if not quicker, than bulkifying the inserts, with the advantage that it's simpler (Occam's Razor and all that).
DECLARE
l_commit_count NUMBER := 10000;
l_rows_copied NUMBER := 0;
l_batch_rows NUMBER;
BEGIN
DBMS_OUTPUT.PUT_LINE('Started at '||TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
LOOP
INSERT /*+ APPEND */
INTO arc_act_plus_triggers1
SELECT /*+ PARALLEL */ apt.*
FROM act_plus_triggers1 apt
WHERE NOT EXISTS
-- anti-join against the insert target, so each committed batch
-- drops out of the next pass and the loop eventually ends
(SELECT 1
FROM arc_act_plus_triggers1 arc
WHERE arc.surr_id = apt.surr_id)
AND apt.status IN ('EXT', 'EXP')
AND rownum <= l_commit_count;
l_batch_rows := SQL%ROWCOUNT; -- capture before COMMIT resets it
COMMIT;
l_rows_copied := l_rows_copied + l_batch_rows;
EXIT WHEN l_batch_rows < 1;
END LOOP;
DBMS_OUTPUT.PUT_LINE('Finished at '||TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
DBMS_OUTPUT.PUT_LINE(TO_CHAR(l_rows_copied)||' rows copied to the archive table');
END;
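Coming back to the question as asked: if you do keep the FORALL, SQL%ROWCOUNT immediately after it gives the total rows affected across all iterations, and SQL%BULK_ROWCOUNT(i) gives the count for iteration i. A self-contained sketch (assuming, as the question implies, that the two tables share the same row shape):
DECLARE
  TYPE t_rows IS TABLE OF arc_act_plus_triggers1%ROWTYPE;
  v_rows t_rows;
  l_inserted NUMBER := 0;
BEGIN
  SELECT * BULK COLLECT INTO v_rows
  FROM act_plus_triggers1
  WHERE rownum <= 10;
  FORALL i IN 1 .. v_rows.COUNT
    INSERT INTO arc_act_plus_triggers1 VALUES v_rows(i);
  -- total across all FORALL iterations
  DBMS_OUTPUT.PUT_LINE('rows inserted: ' || SQL%ROWCOUNT);
  -- per-iteration detail, summed
  FOR i IN 1 .. v_rows.COUNT LOOP
    l_inserted := l_inserted + SQL%BULK_ROWCOUNT(i);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('sum of SQL%BULK_ROWCOUNT: ' || l_inserted);
END;
/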

Oracle INSERT performance for large chunks of data

I am developing a stored procedure for Oracle 10g and I am hitting a pretty heavy performance bottleneck while trying to pass a list of about 2-3k items into the procedure. Here's my code:
TYPE ty_id_list
AS TABLE OF NUMBER(11);
----------------------------------------------------------
PROCEDURE sp_performance_test (
p_idsCollection IN schema.ty_id_list )
AS
TYPE type_numeric_table IS TABLE OF NUMBER(11) INDEX BY BINARY_INTEGER;
l_ids type_numeric_table;
data type_numeric_table;
empty type_numeric_table;
BEGIN
EXECUTE IMMEDIATE
'ALTER TABLE schema.T_TEST_TABLE NOLOGGING';
COMMIT;
SELECT COLUMN_VALUE BULK COLLECT INTO l_ids
FROM TABLE(p_idsCollection);
FOR j IN 1 .. l_ids.COUNT LOOP
data(data.count+1) := l_ids(j);
IF(MOD(data.COUNT,500) = 0 ) THEN
FORALL i IN 1 .. data.COUNT
INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
VALUES (data(i), 'Y');
data := empty;
END IF;
END LOOP;
IF(data.COUNT > 0) THEN
FORALL i IN 1 .. data.COUNT
INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
VALUES (data(i), 'Y');
END IF;
COMMIT;
EXECUTE IMMEDIATE
'ALTER TABLE schema.T_TEST_TABLE LOGGING';
COMMIT;
END sp_performance_test;
The thing that slows the process down quite drastically seems to be this part: data(data.count+1) := l_ids(j); If I skip getting the element from the collection and change this line to data(data.count+1) := j;, procedure execution time is 3-4 times faster (from over 30s to 8-9s for 3k items) - but that logic is obviously not the one I want.
Can you guys give me a hint where I could improve my code to get better insert performance, if any improvements can be made?
Thanks,
Przemek
I don't follow your logic.
You accept a collection. You copy it to another collection:
SELECT COLUMN_VALUE BULK COLLECT INTO l_ids
FROM TABLE(p_idsCollection);
And then you copy it once again, in a loop:
FOR j IN 1 .. l_ids.COUNT LOOP
data(data.count+1) := l_ids(j);
And only after that you manage to perform your 500-row-chunk bulk insert. What is wrong with bulk inserting p_idsCollection immediately?
p.s. You don't need commits after ALTER TABLE; DDL statements commit implicitly.
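To make that concrete, here is a sketch of the procedure bulk-binding the parameter directly, with both intermediate copies removed (same names as the question; it assumes the input collection is densely filled, as collections passed in from clients normally are):
PROCEDURE sp_performance_test (
  p_idsCollection IN schema.ty_id_list )
AS
BEGIN
  EXECUTE IMMEDIATE 'ALTER TABLE schema.T_TEST_TABLE NOLOGGING';
  -- bulk-bind the input collection as-is; no copying, no chunking
  FORALL i IN 1 .. p_idsCollection.COUNT
    INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
    VALUES (p_idsCollection(i), 'Y');
  COMMIT;
  EXECUTE IMMEDIATE 'ALTER TABLE schema.T_TEST_TABLE LOGGING';
END sp_performance_test;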
That whole block after the DDL can be rewritten as:
insert into schema.T_TEST_TABLE (REF_ID, ACTIVE)
select COLUMN_VALUE, 'Y' FROM TABLE(p_idsCollection);
You can also add the APPEND hint to the insert operation:
INSERT /*+ append */ INTO tab (...) SELECT ...
This switches Oracle to a direct-path insert, which is typically faster. (Note the APPEND hint applies to INSERT ... SELECT; for INSERT ... VALUES it is ignored, and 11gR2 added APPEND_VALUES for that case.)
http://www.dba-oracle.com/t_insert_tuning.htm
