Please help me solve this Oracle PL/SQL problem.
I have the following code:
BEGIN
    DECLARE
        P_COMMIT_STEP NUMBER := 10000; -- commit every 10000 records copied
        V_QUERY       VARCHAR2 (4000) := NULL;
        MY_CURSOR     SYS_REFCURSOR;
        TYPE FETCH_ARRAY IS TABLE OF MY_TABLE_BACKUP%ROWTYPE;
        S_ARRAY FETCH_ARRAY;
    BEGIN
        V_QUERY := 'SELECT * FROM MY_TABLE_BACKUP';
        OPEN MY_CURSOR FOR V_QUERY;
        LOOP
            FETCH MY_CURSOR
                BULK COLLECT INTO S_ARRAY
                LIMIT P_COMMIT_STEP;
            FORALL I IN 1 .. S_ARRAY.COUNT
                INSERT INTO MY_TABLE_BIS /*+ APPEND */
                    VALUES S_ARRAY (I);
            COMMIT;
            EXIT WHEN MY_CURSOR%NOTFOUND;
        END LOOP;
        CLOSE MY_CURSOR;
        COMMIT;
    END;
END;
Since the commit step is 10000, the copy only works when the row count is a multiple of 10000.
So, if the original table has 1,000,010 records, only 1,000,000 records are copied.
Where is the error?
The code seems correct, in my opinion.
Thank you very much for considering my request.
As noted in this article, you shouldn't rely on %NOTFOUND with bulk collect and forall. Check how many rows were fetched:
LOOP
    FETCH MY_CURSOR
        BULK COLLECT INTO S_ARRAY
        LIMIT P_COMMIT_STEP;
    FORALL I IN 1 .. S_ARRAY.COUNT
        INSERT INTO MY_TABLE_BIS /*+ APPEND */
            VALUES S_ARRAY (I);
    COMMIT;
    EXIT WHEN S_ARRAY.COUNT < P_COMMIT_STEP;
END LOOP;
Your first hundred iterations will each get 10000 rows, so the count will equal your limit for all of those, and the loop will continue. The next one will get only 10 rows, so it will exit after the FORALL.
It should still work with the %NOTFOUND check at the end of the loop; the article is really about %NOTFOUND being a problem when you use it to exit before processing the partial batch, and the documentation shows the end-of-loop pattern. In some circumstances, though, it seems not to. Having said that, I can't reproduce the issue from your code in 11.2.0.3.
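For reference, the failure mode the article describes comes from exiting before the partial batch is processed, e.g. (a sketch):

    LOOP
        FETCH MY_CURSOR
            BULK COLLECT INTO S_ARRAY
            LIMIT P_COMMIT_STEP;
        -- wrong: a final fetch of 10 rows also sets %NOTFOUND,
        -- so those 10 rows would be skipped
        EXIT WHEN MY_CURSOR%NOTFOUND;
        FORALL I IN 1 .. S_ARRAY.COUNT
            INSERT INTO MY_TABLE_BIS VALUES S_ARRAY (I);
        COMMIT;
    END LOOP;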
Incidentally, committing inside a loop is generally a bad idea unless you've made the block restartable.
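For illustration, one way to make the copy restartable (a sketch; MY_ID stands in for a primary-key column, a hypothetical name here) is to filter out already-copied rows, so a re-run after a failure simply resumes:

    BEGIN
        LOOP
            INSERT INTO MY_TABLE_BIS
                SELECT *
                FROM   MY_TABLE_BACKUP B
                WHERE  NOT EXISTS (SELECT 1
                                   FROM   MY_TABLE_BIS T
                                   WHERE  T.MY_ID = B.MY_ID)
                AND    ROWNUM <= 10000;
            EXIT WHEN SQL%ROWCOUNT = 0; -- nothing left to copy
            COMMIT;                     -- each committed chunk is permanent progress
        END LOOP;
        COMMIT;
    END;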
Hi, a small modification to your FORALL loop:

    FORALL I IN S_ARRAY.FIRST .. S_ARRAY.LAST
        INSERT INTO MY_TABLE_BIS /*+ APPEND */
            VALUES S_ARRAY (I);

I think it will work for you.
What's the best way of getting and outputting how many rows have been inserted by the FORALL statement I have below? I've seen SQL%BULK_ROWCOUNT but I'm not sure how that would work in the statement below.
Is it

    DBMS_OUTPUT.PUT_LINE('rows inserted '||SQL%BULK_ROWCOUNT);

Does the above need to go in another FORALL statement? For the code below, how would I achieve this?
DECLARE
    TYPE t_arc_act_plus_trigger1 IS TABLE OF arc_act_plus_triggers1%ROWTYPE;
    v_arc_act_plus_triggers1 t_arc_act_plus_trigger1;

    CURSOR c_arc_act_plus_triggers1 IS
        SELECT /*+ PARALLEL */ apt.*
        FROM act_plus_triggers1 apt
        WHERE NOT EXISTS
              (SELECT 1
               FROM act_plus_triggers_copy1 aptc
               WHERE aptc.surr_id = apt.surr_id)
          AND apt.status IN ('EXT', 'EXP');
BEGIN
    OPEN c_arc_act_plus_triggers1;
    LOOP
        FETCH c_arc_act_plus_triggers1
            BULK COLLECT INTO v_arc_act_plus_triggers1
            LIMIT 10000; -- limit to 10k to avoid running out of memory

        FORALL i IN 1..v_arc_act_plus_triggers1.COUNT
            INSERT /*+ APPEND_VALUES */ INTO arc_act_plus_triggers1
            VALUES v_arc_act_plus_triggers1(i);

        Com0932.get_parameter('ACT_ARCHIVE_TRIGGER_STOP_YN', l_STOP_PROGRAM_YN);
        IF l_STOP_PROGRAM_YN = 'Y' THEN
            p_location('insert_into_arc_act_plus - STOP_PROGRAM_YN flag = '||l_STOP_PROGRAM_YN||' so ROLLBACK');
            ROLLBACK;
            EXIT;
        END IF;

        -- **************************************************
        -- Output how many records have been inserted here???
        -- **************************************************

        -- commit after every 10000 records into arc_act_plus_triggers1
        COMMIT;
        EXIT WHEN c_arc_act_plus_triggers1%NOTFOUND;
    END LOOP;
    CLOSE c_arc_act_plus_triggers1;
END;
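For the question asked inline: SQL%BULK_ROWCOUNT is a collection indexed by FORALL iteration, so it can't be concatenated into a string directly. Read it (or SQL%ROWCOUNT for the batch total) immediately after the FORALL, before the COMMIT. A minimal sketch:

    -- total rows inserted by the whole FORALL statement
    DBMS_OUTPUT.PUT_LINE('rows inserted: ' || SQL%ROWCOUNT);

    -- or per iteration: each entry is the rowcount of one execution
    FOR i IN 1 .. v_arc_act_plus_triggers1.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE('iteration ' || i || ': ' || SQL%BULK_ROWCOUNT(i));
    END LOOP;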
I haven't checked as I have nothing to test against, so please forgive any missing-semicolon-type errors, and I'm afraid I'm not in a position to performance-check this.
Your code seems to select which rows to insert into the archive table based on their non-existence in the archive. Therefore, simply use an INSERT based on a SELECT limited by a suitable ROWNUM value. Once you commit, the next time round the loop it won't try getting already-archived rows, as you just committed them.
I think this should be as quick, if not quicker, than bulkifying the inserts, with the advantage that it's simpler; Occam's razor and all that.
DECLARE
    l_commit_count NUMBER := 10000;
    l_rows_copied  NUMBER := 0;
BEGIN
    DBMS_OUTPUT.PUT_LINE('Started at '||TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
    LOOP
        INSERT /*+ APPEND */
        INTO arc_act_plus_triggers1
        SELECT /*+ PARALLEL */ apt.*
        FROM act_plus_triggers1 apt
        WHERE NOT EXISTS
              (SELECT 1
               -- check the table we insert into, so each committed
               -- batch shrinks the next SELECT and the loop terminates
               FROM arc_act_plus_triggers1 aptc
               WHERE aptc.surr_id = apt.surr_id)
          AND apt.status IN ('EXT', 'EXP')
          AND rownum <= l_commit_count;
        l_rows_copied := l_rows_copied + SQL%ROWCOUNT; -- read SQL%ROWCOUNT before the COMMIT
        EXIT WHEN SQL%ROWCOUNT < 1;
        COMMIT;
    END LOOP;
    COMMIT;
    DBMS_OUTPUT.PUT_LINE('Finished at '||TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
    DBMS_OUTPUT.PUT_LINE(TO_CHAR(l_rows_copied)||' rows copied to the archive table');
END;
I have to write a dynamic SQL cursor where there are several possibilities for how the SELECT query will be generated. Hence I am choosing dynamic SQL and using the DBMS_SQL package to dynamically create a cursor and fetch the data.
However, the result set is going to be huge: around 11 GB (there are 2.4 million records, and the select list will be about 80 columns long, assuming roughly 50-byte VARCHAR2s per column).
Hence I cannot fetch everything from the cursor at once. I want to know if there is a feature whereby I can fetch the data from the cursor, keeping the cursor open, in blocks of say 1000 records at a time (I will have to do this dynamically).
Please find the code attached, which only fetches and prints the values of the columns (one sample case). I want to use bulk collect here.
Thanks
---------------- code sample ----------------
-- create or replace type TY_DIMDEAL AS TABLE OF VARCHAR2(50);

create or replace procedure TEST_PROC (po_recordset out sys_refcursor)
as
    v_col_cnt    INTEGER;
    v_ind        NUMBER;
    rec_tab      DBMS_SQL.desc_tab;
    v_cursor     NUMBER;
    lvar_output  number := 0;
    lvar_output1 varchar2(100);
    lvar_output3 varchar2(100);
    lvar_output2 varchar2(100);
    LVAR_TY_DIMDEAL TY_DIMDEAL;
    lvarcol      varchar2(100);
begin
    LVAR_TY_DIMDEAL := TY_DIMDEAL();
    lvar_output1 := '';
    v_cursor := dbms_sql.open_cursor;
    dbms_sql.parse(v_cursor,
                   'select to_char(field1), to_char(field2), to_char(field3) from table1, table2',
                   dbms_sql.native);
    dbms_sql.describe_columns(v_cursor, v_col_cnt, rec_tab);
    FOR v_pos in 1 .. rec_tab.LAST LOOP
        LVAR_TY_DIMDEAL.EXTEND();
        DBMS_SQL.define_column(v_cursor, v_pos, LVAR_TY_DIMDEAL(v_pos), 20);
    END LOOP;
    -- DBMS_SQL.define_column(v_cursor, 1, lvar_output1, 20);
    -- DBMS_SQL.define_column(v_cursor, 2, lvar_output2, 20);
    -- DBMS_SQL.define_column(v_cursor, 3, lvar_output3, 20);
    v_ind := dbms_sql.execute(v_cursor);
    LOOP
        v_ind := DBMS_SQL.FETCH_ROWS(v_cursor);
        EXIT WHEN v_ind = 0;
        lvar_output := lvar_output + 1;
        dbms_output.put_line('row number '||lvar_output);
        FOR v_col_seq IN 1 .. rec_tab.COUNT LOOP
            LVAR_TY_DIMDEAL(v_col_seq) := '';
            DBMS_SQL.COLUMN_VALUE(v_cursor, v_col_seq, LVAR_TY_DIMDEAL(v_col_seq));
            dbms_output.put_line(LVAR_TY_DIMDEAL(v_col_seq));
        END LOOP;
    END LOOP;
end TEST_PROC;
Fetching data from a cursor in blocks of reasonable size, while keeping the cursor open, is one of PL/SQL Best Practices.
The above document (see Code 38 item) sketches an approach for when the select list is not known until runtime. Basically:
Define an appropriate type to fetch results into. Let's assume that all the returned columns will be of type VARCHAR2:

    -- inside DECLARE
    TYPE Ty_FetchResults IS TABLE OF DBMS_SQL.VARCHAR2_TABLE;
    lvar_results Ty_FetchResults;
Before each call to DBMS_SQL.FETCH_ROWS, call DBMS_SQL.DEFINE_ARRAY to enable batch fetching.
Call DBMS_SQL.FETCH_ROWS to fetch 1000 rows from the cursor.
Call DBMS_SQL.COLUMN_VALUE to copy the fetched data into your result array.
Process the results, record by record, in a FOR loop. Don't worry about the number of fetched records: if there are records to process, the FOR loop will run correctly; if the result array is empty, the FOR loop will not run.
Exit from the loop when the number of fetched records is less than the expected size.
Remember to DBMS_SQL.CLOSE the cursor.
Your loop body should look like this:
-- initialise the outer collection first, e.g.:
--   lvar_results := Ty_FetchResults();
--   lvar_results.EXTEND(v_col_cnt);
LOOP
    FOR j IN 1..v_col_cnt LOOP
        -- request up to 1000 rows per column, starting at index 1
        DBMS_SQL.DEFINE_ARRAY(v_cursor, j, lvar_results(j), 1000, 1);
    END LOOP;
    v_ind := DBMS_SQL.FETCH_ROWS(v_cursor);
    FOR j IN 1..v_col_cnt LOOP
        lvar_results(j).DELETE;
        DBMS_SQL.COLUMN_VALUE(v_cursor, j, lvar_results(j));
    END LOOP;
    -- process the results, record by record
    FOR i IN 1..lvar_results(1).COUNT LOOP
        NULL; -- process a single record: your logic goes here
    END LOOP;
    EXIT WHEN lvar_results(1).COUNT < 1000;
END LOOP;
-- don't forget: DBMS_SQL.CLOSE_CURSOR(v_cursor);
See also Doing SQL from PL/SQL: Best and Worst Practices.
The LIMIT clause can come to the rescue!
PL/SQL collections are essentially arrays in memory, so massive collections can have a detrimental effect on system performance due to the amount of memory they require. In some situations, it may be necessary to split the data being processed into chunks to make the code more memory-friendly. This "chunking" can be achieved using the LIMIT clause of the BULK COLLECT syntax.
You can use the LIMIT clause after the BULK COLLECT INTO clause to limit your result set. After you exceed the limit, you can fetch the remaining rows.
See this article:
http://www.dba-oracle.com/plsql/t_plsql_limit_clause.htm
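A minimal sketch of that chunked pattern (big_table and col1 are hypothetical names):

    DECLARE
        CURSOR c IS SELECT col1 FROM big_table;
        TYPE t_tab IS TABLE OF big_table.col1%TYPE;
        l_tab t_tab;
    BEGIN
        OPEN c;
        LOOP
            FETCH c BULK COLLECT INTO l_tab LIMIT 1000; -- 1000-row chunks
            FOR i IN 1 .. l_tab.COUNT LOOP
                NULL; -- process l_tab(i) here
            END LOOP;
            EXIT WHEN l_tab.COUNT < 1000; -- last (partial) chunk done
        END LOOP;
        CLOSE c;
    END;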
I am developing a stored procedure for Oracle 10g and I am hitting a pretty heavy performance bottleneck while trying to pass a list of about 2-3k items into the procedure. Here's my code:
TYPE ty_id_list
    AS TABLE OF NUMBER(11);
----------------------------------------------------------
PROCEDURE sp_performance_test (
    p_idsCollection IN schema.ty_id_list )
AS
    TYPE type_numeric_table IS TABLE OF NUMBER(11) INDEX BY BINARY_INTEGER;
    l_ids  type_numeric_table;
    data   type_numeric_table;
    empty  type_numeric_table;
BEGIN
    EXECUTE IMMEDIATE 'ALTER TABLE schema.T_TEST_TABLE NOLOGGING';
    COMMIT;

    SELECT COLUMN_VALUE BULK COLLECT INTO l_ids
    FROM TABLE(p_idsCollection);

    FOR j IN 1 .. l_ids.COUNT LOOP
        data(data.COUNT + 1) := l_ids(j);
        IF (MOD(data.COUNT, 500) = 0) THEN
            FORALL i IN 1 .. data.COUNT
                INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
                VALUES (data(i), 'Y');
            data := empty;
        END IF;
    END LOOP;

    IF (data.COUNT > 0) THEN
        FORALL i IN 1 .. data.COUNT
            INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
            VALUES (data(i), 'Y');
    END IF;
    COMMIT;

    EXECUTE IMMEDIATE 'ALTER TABLE schema.T_TEST_TABLE LOGGING';
    COMMIT;
END sp_performance_test;
The part that adds to the runtime quite drastically seems to be this line: data(data.count+1) := l_ids(j);. If I skip getting the element from the collection and change this line to data(data.count+1) := j;, the procedure executes 3-4 times faster (from over 30 s down to 8-9 s for 3k items), but that obviously is not the logic I want.
Can you give me a hint as to where I could improve my code to get better insert performance, if any improvements can be made?
Thanks,
Przemek
I don't follow your logic.
You accept a collection. You copy it to another collection:
SELECT COLUMN_VALUE BULK COLLECT INTO l_ids
FROM TABLE(p_idsCollection);
And then you copy it once again, in a loop:
FOR j IN 1 .. l_ids.COUNT LOOP
data(data.count+1) := l_ids(j);
And only after that you manage to perform your 500-row-chunk bulk insert. What is wrong with bulk inserting p_idsCollection immediately?
P.S. You don't need commits after ALTER TABLE; DDL statements issue them implicitly.
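For illustration, a sketch of inserting the parameter collection directly inside the procedure (a nested-table parameter normally arrives dense, so 1 .. COUNT is safe):

    FORALL i IN 1 .. p_idsCollection.COUNT
        INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
        VALUES (p_idsCollection(i), 'Y');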
That whole block after the DDL can be rewritten as:

    INSERT INTO schema.T_TEST_TABLE (REF_ID, ACTIVE)
    SELECT COLUMN_VALUE, 'Y' FROM TABLE(p_idsCollection);
You can also add a hint to the insert operation:

    INSERT /*+ APPEND */ INTO tab (...) VALUES (...)

It changes how Oracle writes the data (direct-path insert) and can be faster. (Note: the APPEND hint really applies to INSERT ... SELECT; for single-row INSERT ... VALUES it is generally ignored, and 11gR2 added APPEND_VALUES for that case.)
http://www.dba-oracle.com/t_insert_tuning.htm
I need to do some inserts in a cursor loop over about 300000 rows. This is, however, running slowly; any ideas on how I can make it run faster? Can I speed it up by batching the commits, for example performing a commit after every 1000th row?
DECLARE
    CURSOR test_cursor IS
        SELECT a FROM database.mytable;
BEGIN
    FOR curRow IN test_cursor LOOP
        INSERT INTO tableb (testval)
        VALUES ('something');
        COMMIT;
    END LOOP;
END;
300000 rows is not that many rows. Unless the rows are each extremely large, you should not commit in the middle of the batch.
Intermediate commits will only achieve:
additional overhead because each commit creates additional work,
loss of restartability in case of error (and loss of transactional integrity),
greater chance of running into ORA-1555
If your process is really a cursor with a single insert inside the loop, you should run a single statement:
BEGIN
    INSERT INTO tableb (col1..coln)
    (SELECT col1..coln FROM database.mytable);
END;
If you still need extra performance, you could look into direct-path insert and parallel operations, but it might be over-optimization with "only" 300k rows.
By far the single greatest optimization available to you is to think in term of sets instead of the traditional procedural approach that consists of batches of single row statements.
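For reference, a sketch of the direct-path and parallel route mentioned above, reusing the question's table names (parallel DML must be enabled per session):

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(tableb) */ INTO tableb (testval)
    SELECT 'something' FROM database.mytable;

    COMMIT; -- required before the table can be queried after a direct-path insert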
Or you can try this:
DECLARE
    CURSOR test_cursor IS
        SELECT col1 FROM table_a;
    TYPE fetch_array IS TABLE OF test_cursor%ROWTYPE;
    test_array   fetch_array;
    l_errors     PLS_INTEGER;
    l_dml_errors EXCEPTION;
    PRAGMA EXCEPTION_INIT(l_dml_errors, -24381);
BEGIN
    OPEN test_cursor;
    LOOP
        FETCH test_cursor BULK COLLECT INTO test_array LIMIT 10000;
        FORALL i IN 1..test_array.COUNT SAVE EXCEPTIONS
            INSERT INTO table_b (col1)
            VALUES (test_array(i).col1);
        EXIT WHEN test_cursor%NOTFOUND;
    END LOOP;
    CLOSE test_cursor;
    COMMIT;
EXCEPTION
    WHEN l_dml_errors THEN
        l_errors := SQL%BULK_EXCEPTIONS.COUNT;
        dbms_output.put_line('Number of INSERT statements that failed: ' || l_errors);
        FOR i IN 1 .. l_errors LOOP
            dbms_output.put_line('Error #' || i || ' at iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
            dbms_output.put_line('Error message is ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
        END LOOP;
END;
I would not recommend a cursor approach for this. I use append parallel hints for situations like this. Most of the time your query literally runs N times as fast where N is the parallel degree. It is occasionally a good idea to bypass disaster recovery with nologging / noarchivelog.
For truly large migrations (dozens to hundreds of GB), I've found it's a good idea to batch on a table's natural key (date, usually). Some small amount of state around it can let you cancel + resume the migration at will if necessary.
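A sketch of that idea, with hypothetical names throughout (created_date as the natural key, migration_ctl as an assumed one-row progress table):

    DECLARE
        l_day DATE;
    BEGIN
        SELECT last_done_day + 1 INTO l_day FROM migration_ctl;
        WHILE l_day <= TRUNC(SYSDATE) LOOP
            INSERT /*+ APPEND */ INTO tableb
                SELECT * FROM database.mytable
                WHERE  created_date >= l_day
                AND    created_date <  l_day + 1;
            UPDATE migration_ctl SET last_done_day = l_day;
            COMMIT; -- each day is one restartable unit
            l_day := l_day + 1;
        END LOOP;
    END;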
Maybe this will help you; please try this:

DECLARE
    i NUMBER := 0; -- must be initialised, or MOD(i, 1000) stays NULL
    CURSOR test_cursor IS
        SELECT a FROM database.mytable;
BEGIN
    FOR curRow IN test_cursor LOOP
        INSERT INTO tableb (testval)
        VALUES ('something');
        i := i + 1;
        IF MOD(i, 1000) = 0 THEN
            COMMIT;
        END IF;
    END LOOP;
    COMMIT;
END;
Here's my cursor:
CURSOR C1 IS SELECT * FROM MY_TABLE WHERE SALARY < 50000 FOR UPDATE;
I immediately open the cursor in order to lock these records for the duration of my procedure.
I want to raise an application error in the event that there are < 2 records in my cursor. Using the C1%ROWCOUNT attribute fails because it only counts the rows fetched thus far.
What is the best pattern for this use case? Do I need to create a dummy MY_TABLE%ROWTYPE variable and loop through the cursor, fetching rows and keeping a count, or is there a simpler way? If that is the way to do it, will fetching all the rows in my cursor implicitly close it, thus unlocking those rows, or will it stay open until I explicitly close it even if I've fetched them all?
I need to make sure the cursor stays open for a variety of other tasks beyond this count.
NB: I just reread your question, and you want to fail if there is ONLY 1 record.
I'll post a new update in a moment.
Let's start here.
From Oracle® Database PL/SQL User's Guide and Reference, 10g Release 2 (10.2), Part Number B14261-01 (reference):
All rows are locked when you open the cursor, not as they are fetched. The rows are unlocked when you commit or roll back the transaction. Since the rows are no longer locked, you cannot fetch from a FOR UPDATE cursor after a commit.
So you do not need to worry about the records unlocking.
Try this:
declare
    CURSOR mytable_cur IS
        SELECT * FROM MY_TABLE WHERE SALARY < 50000 FOR UPDATE;
    TYPE mytable_tt IS TABLE OF mytable_cur%ROWTYPE
        INDEX BY PLS_INTEGER;
    l_my_table_recs mytable_tt;
    l_totalcount    NUMBER;
begin
    OPEN mytable_cur;
    l_totalcount := 0;
    LOOP
        FETCH mytable_cur
            BULK COLLECT INTO l_my_table_recs LIMIT 100;
        l_totalcount := l_totalcount + NVL(l_my_table_recs.COUNT, 0);
        -- this is the check for only 1 row
        EXIT WHEN l_totalcount < 2;
        FOR indx IN 1 .. l_my_table_recs.COUNT LOOP
            NULL; -- process each record via l_my_table_recs(indx)
        END LOOP;
        EXIT WHEN mytable_cur%NOTFOUND;
    END LOOP;
    CLOSE mytable_cur;
end;
ALTERNATE ANSWER
I read your question backwards at first and thought you wanted to exit if there was MORE than 1 row, not exactly one, so here is my previous answer.
Two simple ways to check for ONLY 1 record:
Option 1 - Explicit Fetches
declare
    CURSOR C1 IS SELECT * FROM MY_TABLE WHERE SALARY < 50000 FOR UPDATE;
    l_my_table_rec  C1%rowtype;
    l_my_table_rec2 C1%rowtype;
begin
    open C1;
    fetch C1 into l_my_table_rec;
    if C1%NOTFOUND then
        null; -- no data found
    end if;
    fetch C1 into l_my_table_rec2;
    if C1%FOUND then
        null; -- I have more than 1 row
    end if;
    close C1;
    -- processing logic
end;
I hope you get the idea.
Option 2 - Exception Catching
declare
    l_my_table_rec my_table%rowtype;
begin
    begin
        select *
        into   l_my_table_rec
        from   my_table
        where  salary < 50000
        for update;
    exception
        when too_many_rows then
            null; -- handle the exception where more than one row is returned
        when no_data_found then
            null; -- handle the exception where no rows are returned
        when others then
            raise;
    end;
    -- processing logic
end;
Additionally:
Remember, with an explicit cursor you can declare your variable with %ROWTYPE against the cursor rather than the original table; this is especially useful when you have joins in your query.
Also, remember you can update the rows in the table with an

    UPDATE table_name
    SET set_clause
    WHERE CURRENT OF cursor_name;

type statement, but that will only work if you haven't fetched the 2nd row.
For some more information about cursor FOR loops, try Here.
If you're looking to fail whenever you have more than 1 row returned, try this:
declare
    l_my_table_rec my_table%rowtype;
begin
    begin
        select *
        into   l_my_table_rec
        from   my_table
        where  salary < 50000
        for update;
    exception
        when too_many_rows then
            null; -- handle the exception where more than one row is returned
        when no_data_found then
            null; -- handle the exception where no rows are returned
        when others then
            raise;
    end;
    -- processing logic
end;
"If this is the way to do it, will fetching all rows in my cursor implicitly close it, thus unlocking those rows?"
The locks will be present for the duration of the transaction (ie until you do a commit or rollback) irrespective of when (or whether) you close the cursor.
I'd go for
declare
    CURSOR c1 IS SELECT * FROM MY_TABLE WHERE SALARY < 50000 FOR UPDATE;
    v_1   c1%rowtype;
    v_cnt number;
begin
    open c1;
    select count(*) into v_cnt
    from MY_TABLE where SALARY < 50000 and rownum < 3;
    if v_cnt < 2 then
        raise_application_error(-20001, '...');
    end if;
    -- other processing
    close c1;
end;
There's a very small chance that, between the time the cursor is opened (locking rows) and the SELECT COUNT, someone inserts one or more rows into the table with a salary under 50000. In that case the application error would be raised, but the cursor would only process the rows present when the cursor was opened. If that is a worry, do another check on c1%ROWCOUNT at the end and, if the problem occurred, roll back to a savepoint.
Create a savepoint before you iterate through the cursor and then use a partial rollback when you find there are < 2 records returned.
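A sketch of the savepoint approach (counting while processing, then undoing the work if the threshold wasn't met):

    DECLARE
        l_count PLS_INTEGER := 0;
        CURSOR c1 IS SELECT * FROM my_table WHERE salary < 50000 FOR UPDATE;
    BEGIN
        SAVEPOINT before_processing;
        FOR r IN c1 LOOP
            l_count := l_count + 1;
            -- per-row work here
        END LOOP;
        IF l_count < 2 THEN
            ROLLBACK TO before_processing; -- releases locks taken after the savepoint
            RAISE_APPLICATION_ERROR(-20001, 'fewer than 2 rows matched');
        END IF;
    END;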
You can start a transaction and check whether SELECT COUNT(*) FROM MY_TABLE WHERE SALARY < 50000 returns a value greater than 1.
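A minimal sketch of that check, reusing the question's predicate:

    DECLARE
        l_cnt NUMBER;
    BEGIN
        SELECT COUNT(*) INTO l_cnt FROM my_table WHERE salary < 50000;
        IF l_cnt < 2 THEN
            RAISE_APPLICATION_ERROR(-20001, 'fewer than 2 matching rows');
        END IF;
        -- open the FOR UPDATE cursor and continue processing here
    END;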