Questions related to efficient DML in procedures/functions - performance

I have two questions regarding the performance of PL/SQL scripts when executing DML. Of course, EXECUTE IMMEDIATE is the slowest approach; that's why we have FORALL, bulk inserts, etc. My questions are:
I have to manipulate data in three different tables: Table1 (insert data), Table2 (update data) and Table3 (delete data). All of this would be done based on values fetched using a cursor. The question is: what would be more efficient here?
Putting each of these statements in an individual FORALL block, i.e.
fetch cursor
loop
  forall loop for table 1
  forall loop for table 2
  forall loop for table 3
end loop
OR
a single outer loop, executing these statements inside it, i.e.
fetch cursor
loop
  for i in 1 .. array.count
  loop
    3 statements for DML
  end loop
end loop
Now my second question: what is the efficient way to delete records in a loop? I fetched the values of the records to be deleted through a cursor; what would be the efficient way to delete them?
P.S.: Excuse my formatting.

The most efficient approach would be to write three SQL statements, assuming the data fetched from the cursor is stable over the period of time that the procedure is running:
INSERT INTO table1( list_of_columns )
<<your SELECT statement>>;
UPDATE table2
SET (<<list of columns>>) = (<<your SELECT statement joined to table2>>)
WHERE EXISTS( <<your SELECT statement joined to table2>> );
DELETE FROM table3
WHERE EXISTS( <<your SELECT statement joined to table3>> );
If the SELECT statement will potentially return different results in each of the three DML statements, then it makes sense to accept the performance hit of using a cursor, bulk collecting the data into PL/SQL collections, and looping over the collections in order to ensure consistent results. If that's what you're doing, it will be more efficient to have three FORALL statements since that involves fewer context shifts between the SQL and PL/SQL engines.
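A minimal sketch of that shape, assuming hypothetical table and column names (source_table, key_col, val_col), with one FORALL per DML statement so each batch crosses into the SQL engine only three times:
DECLARE
  CURSOR src_cur IS
    SELECT key_col, val_col FROM source_table;
  TYPE key_tab_t IS TABLE OF source_table.key_col%TYPE INDEX BY PLS_INTEGER;
  TYPE val_tab_t IS TABLE OF source_table.val_col%TYPE INDEX BY PLS_INTEGER;
  l_keys key_tab_t;
  l_vals val_tab_t;
BEGIN
  OPEN src_cur;
  LOOP
    FETCH src_cur BULK COLLECT INTO l_keys, l_vals LIMIT 1000;
    EXIT WHEN l_keys.COUNT = 0;
    -- Three separate FORALL statements: each one is a single batched
    -- call to the SQL engine rather than one call per row
    FORALL i IN 1 .. l_keys.COUNT
      INSERT INTO table1 (key_col, val_col) VALUES (l_keys(i), l_vals(i));
    FORALL i IN 1 .. l_keys.COUNT
      UPDATE table2 SET val_col = l_vals(i) WHERE key_col = l_keys(i);
    FORALL i IN 1 .. l_keys.COUNT
      DELETE FROM table3 WHERE key_col = l_keys(i);
  END LOOP;
  CLOSE src_cur;
END;
/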
What is the efficient way to delete records in a loop? I fetched the values of the records to be deleted through the cursor. Now what would be the efficient way to delete them?
I'm not sure I understand the question. Wouldn't you just do a FORALL loop just as you would for an INSERT or an UPDATE?
FORALL i IN l_array.first .. l_array.last
DELETE FROM some_table
WHERE some_key = l_array(i);
Or are you asking a different question?

Related

Nested cursor in a cursor

I have a cursor which is
CURSOR B_CUR IS select DISTINCT big_id from TEMP_TABLE;
This would return multiple values. Earlier it was being used as
FOR b_id IN B_CUR LOOP
select s.col1, s.col2 INTO var1, var2 from sometable s where s.col3 = b_id.big_id;
END LOOP;
Earlier it was certain that the inner select query would always return 1 row. Now this query can return multiple rows. How can I change this logic?
I was thinking to create a nested cursor which will fetch into an array of a record type (which I will declare), but I have no idea how a nested cursor would work here.
My main concern is efficiency. Since it would be working on millions of records per execution. Could you guys suggest what would be the best approach here?
Normally, you would just join the two tables.
FOR some_cursor IN (SELECT s.col1,
s.col2
FROM sometable s
JOIN temp_table t ON (s.col3 = t.big_id))
LOOP
<<do something>>
END LOOP;
Since you are concerned about efficiency, however:
Is TEMP_TABLE really a temporary table? If so, why? It is exceedingly rare that Oracle actually needs to use temporary tables so that leads me to suspect that you're probably doing something inefficient to populate the temporary table in the first place.
Why do you have a cursor FOR loop to process the data from TEMP_TABLE? Row-by-row processing is the slowest way to do anything in PL/SQL, so it would generally be avoided if you're concerned about efficiency. From a performance standpoint, you want to maximize SQL: rather than a loop that does a series of single-row INSERT or UPDATE operations, do a single INSERT or UPDATE that modifies an entire set of rows. If you really need to process data in chunks, that's where PL/SQL collections and bulk processing come into play, but that will not be as efficient as straight SQL.
Why do you have the DISTINCT in your query against TEMP_TABLE? Do you really expect duplicate big_id values that are not erroneous? Most of the time, people use DISTINCT incorrectly: either to cover up problems where data has been joined incorrectly, or to force Oracle to do an expensive sort just in case incorrect data gets created in the future, when a constraint would be the more appropriate way to protect yourself.
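For instance, if the loop body ultimately just moves or copies the joined rows, the set-based version collapses to a single statement (target_table and its column list are hypothetical here; the join comes from the query above):
INSERT INTO target_table (col1, col2)
SELECT s.col1, s.col2
FROM sometable s
JOIN temp_table t ON s.col3 = t.big_id;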
FOR b_id IN B_CUR LOOP
for c_id in (select s.col1, s.col2 from sometable s where s.col3 = b_id.big_id) loop
......
end loop;
END LOOP;

bulk collect in oracle

How do I query a bulk collection? If, for example, I have
select name
bulk collect into namesValues
from table1
where namesValues is a dbms_sql.varchar2_table.
Now, I have another table XYZ which contains:
name    is_valid
----    --------
v
h
I want to update is_valid to 'Y' if name is in table1, else 'N'. Table1 has 10 million rows. After bulk collecting, I want to execute
update xyz
set is_valid ='Y'
where name in namesValues.
How do I query namesValues? Or is there another option? Table1 has no index. Please help.
As Tom Kyte (Oracle Corp. Vice President) says:
My mantra, that I'll be sticking with thank you very much, is:
You should do it in a single SQL statement if at all possible.
If you cannot do it in a single SQL Statement, then do it in PL/SQL.
If you cannot do it in PL/SQL, try a Java Stored Procedure.
If you cannot do it in Java, do it in a C external procedure.
If you cannot do it in a C external routine, you might want to
seriously think about why it is you need to do it…
think in sets...
learn all there is to learn about SQL...
You should perform your update in SQL if you can. If you need to add an index to do this then that might be preferable to looping through a collection populated with BULK COLLECT.
If, however, this is some sort of assignment... you should specify it as such, but here's how you would do it.
I have assumed that your DB server does not have the capacity to hold 10 million records in memory so rather than BULK COLLECTing all 10 million records in one go I have put the BULK COLLECT into a loop to reduce your memory overheads. If this is not the case then you can omit the bulk collect loop.
DECLARE
c_bulk_limit CONSTANT PLS_INTEGER := 500000;
--
CURSOR names_cur
IS
SELECT name
FROM table1;
--
TYPE namesValuesType IS TABLE OF table1.name%TYPE
INDEX BY PLS_INTEGER;
namesValues namesValuesType;
BEGIN
-- Populate the collection
OPEN names_cur;
LOOP
-- Fetch the records in a loop limiting them
-- to the c_bulk_limit amount at a time
FETCH names_cur BULK COLLECT INTO namesValues
LIMIT c_bulk_limit;
-- Process the records in your collection
FORALL x IN INDICES OF namesValues
UPDATE xyz
SET is_valid ='Y'
WHERE name = namesValues(x)
AND is_valid != 'Y';
-- Set up loop exit criteria
EXIT WHEN namesValues.COUNT < c_bulk_limit;
END LOOP;
CLOSE names_cur;
-- You want to update all remaining rows to 'N'
UPDATE xyz
SET is_valid ='N'
WHERE is_valid IS NULL;
EXCEPTION
WHEN OTHERS
THEN
IF names_cur%ISOPEN
THEN
CLOSE names_cur;
END IF;
-- Re-raise the exception;
RAISE;
END;
/
Depending upon your rollback segment sizes etc., you may want to issue interim commits within the bulk collect loop, but be aware that you will not then be able to roll back those changes. I deliberately haven't added any COMMITs so you can choose where to put them to suit your system.
You also might want to change the size of the c_bulk_limit constant depending upon the resources available to you.
Your update will still cause you problems if the xyz table is large and there is no index on the name column.
Hope it helps...
"Table1 has no index."
Well there's your problem right there. Why not? Put an index on TABLE1.NAME and use a normal SQL UPDATE to amend the data in XYZ.
Trying to solve this problem with bulk collect is not the proper approach.
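That normal SQL UPDATE might look like this (a sketch; it assumes, per the question, that names present in table1 get 'Y' and everything else gets 'N'):
UPDATE xyz x
SET x.is_valid = CASE
                   WHEN EXISTS (SELECT NULL
                                FROM table1 t
                                WHERE t.name = x.name)
                   THEN 'Y'
                   ELSE 'N'
                 END;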

Oracle pl sql 10g - move set of rows from a table to a history table with same structure

A PL/SQL routine moves older versions of data from a transaction table to a history table of the same structure, archiving them for a certain period:
for each record
  insert into tab_hist (select older_versions of current row);
  delete from tab (select older_versions of current row);
end
P.S.: Earlier we were not archiving (no insert), but after adding the insert the run time has doubled. Can we accomplish the insert and delete with a single SELECT statement? There is a large amount of data to be processed, across multiple tables.
This is a batch operation, right? In which case you should avoid Row By Row and use set processing. SQL is all about The Joy Of Sets.
Oracle has fantastic bulk SQL processing capabilities. The pseudocode you posted would look something like this:
declare
cursor c_oldrecs is
select * from your_table
where criterion between some_date and some_other_date;
type rec_nt is table of your_table%rowtype;
oldrecs_coll rec_nt;
begin
open c_oldrecs;
loop
fetch c_oldrecs bulk collect into oldrecs_coll limit 1000;
exit when oldrecs_coll.count() = 0;
forall i in oldrecs_coll.first() .. oldrecs_coll.last()
insert into your_table_hist
values oldrecs_coll(i);
forall i in oldrecs_coll.first() .. oldrecs_coll.last()
delete from your_table
where pk_col = oldrecs_coll(i).pk_col;
end loop;
close c_oldrecs;
end;
/
This bulk processing is faster because it sends the DML to the database in batches of one thousand rows, instead of switching between the PL/SQL and SQL engines for every single row. The LIMIT 1000 clause is there to prevent a really huge selection from blowing the PGA. This safeguard may not be necessary in your case, or perhaps you can work with a higher value.
I think your current implementation is wrong. It is better to keep only the current version in the live table, and to keep all the historical versions in a separate table from the off. Use triggers to maintain the history as part of every transaction.
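A minimal sketch of such a trigger, using the tab/tab_hist names from the question but hypothetical columns (pk_col, col_a, col_b, archived_at): it copies the outgoing version into the history table whenever the live row is updated.
CREATE OR REPLACE TRIGGER trg_tab_hist
BEFORE UPDATE ON tab
FOR EACH ROW
BEGIN
  -- Preserve the outgoing version before it is overwritten; only
  -- :OLD values are referenced, so there is no mutating-table issue
  INSERT INTO tab_hist (pk_col, col_a, col_b, archived_at)
  VALUES (:OLD.pk_col, :OLD.col_a, :OLD.col_b, SYSDATE);
END;
/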
It may be that the slowness you are seeing is due to the logic that selects which rows are to be moved. If so, you might get better results by doing the select once to get the rowids into a nested table in memory, then doing the insert and the delete based on that list; or alternatively, driving your loop with a query that selects the rows to be moved.
You might instead consider creating a trigger on insert that will move the existing rows that "match" the row being inserted. This will slow down the inserts somewhat, but would mean you don't need any process to move the old rows in bulk.
If you are on Enterprise edition with the partitioning option, look at partition exchange.
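For completeness, partition exchange swaps a whole partition with a staging table in one near-instant DDL operation; a sketch with hypothetical names (the two tables must have identical structure):
ALTER TABLE your_table
  EXCHANGE PARTITION p_old
  WITH TABLE your_table_hist_stage
  INCLUDING INDEXES
  WITHOUT VALIDATION;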
As simple as this:
CREATE TABLE BACKUP_TAB AS SELECT * FROM TAB;
If you are deleting a lot of rows you will be hitting your undo tablespace, and a delete of, say, 100k rows can cause performance issues. You are better off deleting in batches of, say, 5k rows at a time and committing.
BEGIN
-- The WHERE condition on the insert and the delete must be the same.
-- Back up everything in one statement first; a plain INSERT generates
-- comparatively little undo, so it does not need to be batched.
INSERT INTO BACKUP_TAB
SELECT * FROM TAB
WHERE 1=1; --Your condition here
COMMIT;
-- Then delete in batches of roughly 5k rows to limit undo usage
LOOP
DELETE FROM TAB
WHERE 1=1 --Your condition here
AND ROWNUM < 5000;
EXIT WHEN SQL%ROWCOUNT = 0;
COMMIT;
END LOOP;
COMMIT;
END;

Performance problem with Oracle BULK FETCH and FORALL insert

I am trying to copy records from one table to another as fast as possible.
Currently I have a simple cursor loop similar to this:
FOR rec IN source_cursor LOOP
INSERT INTO destination (a, b) VALUES (rec.a, rec.b);
END LOOP;
I want to speed it up to be super fast, so I am trying some BULK operations (a BULK FETCH, then a FORALL insert).
Here is what I have for the bulk select / forall insert:
DECLARE
TYPE t__event_rows IS TABLE OF _event%ROWTYPE;
v__event_rows t__event_rows;
CURSOR c__events IS
SELECT * FROM _EVENT ORDER BY MESSAGE_ID;
BEGIN
OPEN c__events;
LOOP
FETCH c__events BULK COLLECT INTO v__event_rows LIMIT 10000; -- limit to 10k to avoid out of memory
EXIT WHEN v__event_rows.COUNT = 0; -- test the collection, not %NOTFOUND, or the final partial batch is skipped
FORALL i IN 1..v__event_rows.COUNT SAVE EXCEPTIONS
INSERT INTO destination
( col1, col2, a_sequence)
VALUES
( v__event_rows(i).col1, v__event_rows(i).col2, SOMESEQUENCE.NEXTVAL );
END LOOP;
CLOSE c__events;
END;
My problem is that I'm not seeing any big gains in performance so far. From what I read it should be 10x-100x faster.
Am I missing a bottleneck here somewhere?
The only benefit your code has over a simple INSERT+SELECT is that you save exceptions, plus (as Justin points out) you have a pointless ORDER BY which is making it do a whole lot of meaningless work. You then don't have any code to do anything with the exceptions that were saved, anyway.
I'd just implement it as an INSERT+SELECT.
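A sketch of that rewrite, reusing the placeholder names from the question (_EVENT, destination, SOMESEQUENCE):
INSERT INTO destination (col1, col2, a_sequence)
SELECT col1, col2, SOMESEQUENCE.NEXTVAL
FROM _event;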
You do not have to use loops unless the logic genuinely requires row-by-row processing.

ORA-14551: cannot perform a DML operation inside a query

I have the following inside a package and it is giving me an error:
ORA-14551: cannot perform a DML operation inside a query
Code is:
DECLARE
CURSOR F IS
SELECT ROLE_ID
FROM ROLE
WHERE GROUP = 3
ORDER BY GROUP ASC;
BEGIN
FOR R IN F LOOP
DELETE FROM my_gtt_1;
COMMIT;
INSERT INTO my_gtt_1
( USER, role, code, status )
(SELECT
trim(r.user), r.role, r.code, MAX(status_id)
FROM
table1 r,
tabl2 c
WHERE
r.role = R.role
AND r.code IS NOT NULL
AND c.group = 3
GROUP BY
r.user, r.role, r.code);
SELECT c.role,
c.subgroup,
c.subgroup_desc,
v_meb_cnt
INTO record_type
FROM ROLE c
WHERE c.group = '3' and R.role = '19'
GROUP BY c.role,c.subgroup,c.subgroup_desc;
PIPE ROW (record_type);
END LOOP;
END;
I call the package like this in one of my procedures:
OPEN cv_1 for SELECT * FROM TABLE(my_package.my_func);
How can I avoid this ORA-14551 error?
FYI I have not pasted the entire code inside the loop. Basically inside the loop I am entering stuff in GTT, deleting stuff from GTT and then selecting stuff from GTT and appending it to a cursor.
The meaning of the error is quite clear: a function called from a SELECT statement cannot execute DML statements (INSERT, UPDATE or DELETE), or indeed DDL statements, come to that.
Now, the snippet of code you have posted contains a call to PIPE ROW, so plainly you are calling this as SELECT * FROM TABLE(). But it includes DELETE and INSERT statements so clearly it falls foul of the purity levels required for functions in SELECT statements.
So, you need to remove those DML statements. You are using them to populate a global temporary table, but this is good news. You haven't included any code which actually uses the GTT, so it is difficult to be sure, but using GTTs is often unnecessary. With more details we can suggest workarounds.
Is this related to this other question of yours? If so, did you follow my advice to check that answer I had given to a similar question?
For the sake of completeness, it is possible to include DML and DDL statements in a function called in a SELECT statement. The workaround is to use the AUTONOMOUS_TRANSACTION pragma. This is rarely a good idea, and certainly wouldn't help in this scenario. Because the transaction is autonomous the changes it makes are invisible to the calling transaction. Meaning in this case that the function cannot see the outcome of the deletion or insertion in the GTT.
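For illustration only, such a function would look something like this (the function and table names are hypothetical; note the mandatory COMMIT before returning):
CREATE OR REPLACE FUNCTION log_and_return (p_val NUMBER)
  RETURN NUMBER
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO some_log_table (val) VALUES (p_val);
  COMMIT; -- an autonomous transaction must be ended before the function returns
  RETURN p_val;
END;
/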
The error means you are SELECTing from a function which modifies data (DELETE, INSERT in your case).
Remove the data modification statements from that function into a separate SP, if you need that functionality. (I guess I don't understand from the code snippet why you want to delete and insert inside the loop)
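A sketch of that split, with a hypothetical load_my_gtt procedure holding the DELETE and INSERT that currently live in the function:
BEGIN
  my_package.load_my_gtt; -- hypothetical: performs the DELETE/INSERT on my_gtt_1
  OPEN cv_1 FOR SELECT * FROM TABLE(my_package.my_func); -- my_func now contains no DML
END;
/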
