Basically, a cursor is an area of memory used to store the result of a particular query. One question I have is: do cursors implicitly loop through all the records? Suppose I write a code snippet like the following:
declare
cursor cur_dum is
select name,class,enroll_id from table_student;
begin
fetch cur_dum into the_name, the_class, the_enroll_id;
update table_log set statement = the_name || '-'||'-'||to_char(the_enroll_id)
where roll_id = the_enroll_id;
close cur_dum;
end;
Will this code snippet, without any explicit loop statement, automatically loop through all the records in table_student and perform the corresponding update in table_log? Do I need to add a loop after the fetch statement? What difference would it make if I used a BULK COLLECT statement during fetching?
From the answer, I gather that explicitly stating a loop is necessary.
I came across a snippet of code which used a loop for a cursor and also used a FOR loop inside it. The following is the code snippet:
Cursor CurSquirell IS
select Name,a_val,b_val,col_ID from table_temp;
BEGIN
LoopCounter := 0;
commit;
LOOP
FETCH CurSquirell BULK COLLECT INTO my_name,my_a_val,my_b_val,my_col_id LIMIT 1000;
LoopCounter := LoopCounter + 1;
FOR intIndex IN 1 .. my_col_id.COUNT LOOP
counter := counter +1;
BEGIN
select t.tender_val,t.tender_pay, t.page_no, t.loc
into my_tender_val,my_tender_pay,my_page_no , my_loc
from bussiness_trans bt, tender_details t
where t.account_no = bt.account_no
and bt.external_id=my_col_id(intIndex)
and trim(replace(t.tender_pay,'0',' ')) = trim(replace(a_val(intIndex),'0',' '))
and bt.id_type=1;
BEGIN
select pp.lock_id into my__lock_id
from pay_roll pp
where pp.pay_points= my_tender_pay
and bt.id_type=5;
BEGIN
update tab_cross_exchange tce
set tce.cross_b_val = my_b_val(intIndex)
where tce.lock_id = my_lock_id;
..............................sql statements...
...sql statements...
end;
end;
end;
When a loop has already been used in the code to go through the records one by one, why has the FOR loop been used as well? In what situations would you require a FOR loop like this inside a cursor loop? Does the BULK COLLECT have anything to do with forcing the usage of the FOR loop?
"a cursor is an area of memory which is used to store the result of a particular query"
Not quite. A cursor is a pointer to an area of memory used to store information about a query. Results of the query are stored in other areas of memory.
The PL/SQL syntax you use specifies a variable which defines a query. To execute the query you need to
Open the cursor
Fetch the data into target variable(s)
When finished, close the cursor
Each fetch returns one row. To exhaust the query you need to execute the fetch in a loop. This is the verbose way of doing so:
declare
cursor cur_dum is
select name,class,enroll_id from table_student;
rec_dum cur_dum%rowtype;
begin
open cur_dum;
loop
fetch cur_dum into rec_dum;
exit when cur_dum%notfound;
update table_log
set statement = rec_dum.name || '-'||'-'||to_char(rec_dum.enroll_id)
where roll_id = rec_dum.enroll_id;
end loop;
close cur_dum;
end;
Note: one benefit of this explicit cursor notation is that we can define a variable typed to the projection of the cursor's query (rec_dum above).
This is the same logic using implicit cursor notation:
begin
for rec_dum in (select name,class,enroll_id from table_student)
loop
update table_log
set statement = rec_dum.name || '-'||'-'||to_char(rec_dum.enroll_id)
where roll_id = rec_dum.enroll_id;
end loop;
end;
" Does the bulk collect has to do anything to force the usage of For loop ?"
BULK COLLECT is the syntax which allows us to populate a nested table variable with a set of records, so we can do bulk processing rather than the row-by-row processing of the basic FETCH illustrated above. The snippet you quote grabs a subset of 1000 records at a time, which is necessary when dealing with large amounts of data because variables populate private (session) memory rather than global (shared) memory. The code you quoted is very poor, not least because the FETCH ... BULK COLLECT INTO statement is not followed by a test of whether the FETCH returned any values. Because there's no test, the subsequent code will fail at runtime.
"Does the usage of for loop inside the cursor loop make the code poor ? "
No, not automatically. For instance when doing bulk processing we may often do something like this:
<< batch_loop >>
loop
fetch dum_cur bulk collect into dum_recs limit 1000;
exit when dum_recs.count() = 0;
<< row_loop >>
for idx in dum_recs.first()..dum_recs.last()
loop
do_something(dum_recs(idx));
end loop row_loop;
end loop batch_loop;
However, we should be suspicious of nested CURSOR FOR loops. Nested loops are common in 3GL programs like Java:
for (int i = 1; i <= 5; i++) {
    for (int j = 1; j <= 10; j++) {
        // ...
    }
}
So developers familiar with that style of coding often reach for nested loops when moving to PL/SQL. But SQL is a set-based paradigm. There are usually better ways of implementing that logic, such as a JOIN: make the two cursors into one.
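As a hypothetical sketch (the table and column names are invented for illustration), a pair of nested cursor loops can usually be collapsed into a single cursor over a join:

-- instead of looping over parent rows and running a second
-- cursor against the child table for each one:
cursor cur_joined is
    select p.id, p.val, c.other_val
    from   parent_table p
    join   child_table  c on c.parent_id = p.id;

One loop over cur_joined then replaces both loops, and the database gets to choose an efficient join plan instead of executing the inner query once per outer row.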
Related
I'll start by saying what I think I have understood.
An explicit cursor is used because we need to reuse the query later.
If a non-explicit cursor is used (i.e. for cs in (select .........)), the query is re-executed each time the cursor is used. Therefore an explicit cursor is used for efficiency.
To factor the code, we can use a "pipelined table function" or a "view" to create a cursor.
I would like to know why I should use one solution over another.
Here is the pro and con that I know about these solutions:
Neither pro nor con:
I can extract part of the view or pipelined table function with a select statement.
Con:
The record type and table type used by the pipelined table function must be declared. It takes time.
Pro:
We can use all the possibilities of PL/SQL inside a pipelined table function, such as a loop.
Is everything I have said true?
Are there other things that I should know?
With both cursor types, the database executes the query when the cursor opens. Provided the cursor remains open, you can fetch the results from it later without re-running it. So both are equally efficient in that respect.
An explicit cursor is one where you control its full lifecycle: open, fetch, and close. With an implicit cursor, PL/SQL handles this for you.
You use an explicit cursor when you want full control over the fetch process. The main use case for this is bulk collection with a limit.
An explicit cursor can also be handy if you want to use the same query in many places in your application. Declare it at the package level and you can reference it anywhere else you like:
create or replace package pkg as
cursor common_cursor is
select ...
end;
This gives a single definition for the query, which can make your code more maintainable. The problem with this is you're on the hook for opening, fetching, and closing it wherever you use it. In most cases, this results in much more code for minimal benefit.
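For instance, every caller has to manage the cursor's lifecycle itself, along these lines (a sketch; the row processing is a placeholder):

declare
    rec pkg.common_cursor%rowtype;
begin
    open pkg.common_cursor;
    loop
        fetch pkg.common_cursor into rec;
        exit when pkg.common_cursor%notfound;
        -- process rec here
        null;
    end loop;
    close pkg.common_cursor;
end;

Multiply that boilerplate by every place the query is used and the maintenance benefit starts to look thin.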
Which brings us to views. Instead of declaring the common cursor, you could place the common query in a view:
create or replace view common_query as
select ...;
You can then use this in any other SQL statement just like a regular table. So you can join to it, etc. You can't do this with an explicit cursor directly. You have to wrap it in a (pipelined) table function:
create or replace function pipetf
return ...
pipelined
as
retvals ...;
begin
open pkg.common_cursor;
loop
fetch pkg.common_cursor
bulk collect into retvals limit 100;
exit when retvals.count = 0;
for i in 1 .. retvals.count loop
pipe row ( retvals ( i ) ) ;
end loop;
end loop;
close pkg.common_cursor ;
return;
end pipetf;
/
This allows you to use the cursor within another SQL statement like a view:
select * from pipetf;
At this point, a pipelined table function seems a lot more faff than a view. So why bother?
Well it allows you to do things views can't (easily):
Generate new rows or manipulate the result set procedurally
Create parameterized queries
In general you can't pass a variable to a query like this in a view (there are ways, but they come with gotchas):
select c2 from ...
where c1 = :var
group by c2;
Whereas you can in an explicit cursor:
cursor common_cursor ( var int ) is
select c2 from ...
where c1 = var
group by c2;
So you could use this in a PTF to create a reusable, parameterized query:
create or replace function pipetf ( var int )
return ...
pipelined
as
retvals ...;
begin
open pkg.common_cursor ( var );
loop
fetch pkg.common_cursor
bulk collect into retvals limit 100;
exit when retvals.count = 0;
for i in 1 .. retvals.count loop
pipe row ( retvals ( i ) ) ;
end loop;
end loop;
close pkg.common_cursor ;
return;
end pipetf;
/
So if you need to use PL/SQL to create new rows, manipulate a queries results, or want reusable parameterized queries, pipelined table functions were the way to go.
Why were?
Oracle Database 18c added polymorphic table functions, which cover many of the row generation/result manipulation examples. And from 19.6 you can create SQL macros, which you can use to emulate parameterized views. These features cover most (all?) of the use cases for pipelined table functions (and more).
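As a rough sketch of the SQL macro approach (the table and column names are invented), a table macro returns the text of the parameterized query, and you query it like a view:

create or replace function common_query_m ( var int )
    return varchar2 sql_macro
as
begin
    return q'[
        select c2 from some_table
        where  c1 = var
        group  by c2
    ]';
end common_query_m;
/

select * from common_query_m ( 42 );

Unlike a pipelined table function, the macro text is folded into the calling statement at parse time, so the optimizer sees one plain SQL query.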
If you just need a reusable query with no extra processing, I'd stick with a view.
I have one table called EMP with 140000 rows and I need to keep the entire data in a collection. How do I extend the collection and load the entire data into it using the "BULK COLLECT ... LIMIT" clause?
The logic below is not providing the required result, since the data is overwritten by each new batch of records. Please suggest the correct logic.
DECLARE
CURSOR c_get_employee IS
SELECT empno,
ename,
deptno,
sal
FROM emp;
TYPE t_employee
IS TABLE OF c_get_employee%ROWTYPE INDEX BY binary_integer;
l_employee T_EMPLOYEE;
BEGIN
OPEN c_get_employee;
LOOP
FETCH c_get_employee bulk collect INTO l_employee limit 300;
EXIT WHEN l_employee.count = 0;
END LOOP;
CLOSE c_get_employee;
FOR i IN 1..l_employee.count LOOP
dbms_output.Put_line (L_employee(i).ename
||'<-->'
||L_employee(i).sal);
END LOOP;
EXCEPTION
WHEN OTHERS THEN
dbms_output.Put_line ('Unexpected error :- '
|| SQLERRM);
END;
You are exiting the loop too early: each FETCH overwrites the collection, so by the time you print it only the final (empty) batch is left. You need to process each batch inside the fetch loop, and close the cursor after that loop ends.
Also, as @APC pointed out, the exit condition should use the count of fetched results instead of NOTFOUND on the cursor. Otherwise, if the last fetch returns fewer records than the fetch size, NOTFOUND will be true and the loop terminates incorrectly.
Try this:
DECLARE
CURSOR c_get_employee IS
SELECT empno,
ename,
deptno,
sal
FROM emp;
TYPE t_employee
IS TABLE OF c_get_employee%ROWTYPE INDEX BY binary_integer;
l_employee T_EMPLOYEE;
BEGIN
OPEN c_get_employee;
LOOP
FETCH c_get_employee bulk collect INTO l_employee limit 3;
EXIT WHEN l_employee.count = 0;
FOR i IN 1..l_employee.count LOOP
dbms_output.Put_line (L_employee(i).ename
||'<-->'
||L_employee(i).sal);
END LOOP;
END LOOP;
CLOSE c_get_employee;
EXCEPTION
WHEN OTHERS THEN
dbms_output.Put_line ('Unexpected error :- '
|| SQLERRM);
END;
"The below logic is not giving the required result"
Wild guess: you're only getting twelve rows. This is a familiar gotcha with the LIMIT clause. This line is the problem:
EXIT WHEN c_get_employee%NOTFOUND;
You have fourteen records in EMP. The limit of 3 means you collect four full sets of three records; the fifth FETCH collects only the remaining 2 records, and PL/SQL interprets this as NOTFOUND, so that final pair is skipped. The solution is to check the size of the collection instead:
EXIT WHEN l_employee.count() = 0;
"I want to load the entire data into the collection and close the cursor. After that I want to open the collection and use the data for business logic."
That's not how BULK COLLECT ... LIMIT works. The point of the LIMIT clause is to, er, limit the number of records fetched at a time. We need to do this when the queried data is too big to handle in a single fetch. PL/SQL collections are memory structures held in the session's allocation of memory: if they get too big they will blow the PGA. (Definition of "too big" will depend on how your DBA has configured the PGA.)
So, if you have a small result set, ditch the LIMIT clause and populate the collection in a single fetch. But if you have sufficient data to require the LIMIT clause you need to include the business logic loop inside the fetch loop.
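For a small result set the whole thing collapses to a single fetch (a sketch based on the cursor declared above):

open c_get_employee;
fetch c_get_employee bulk collect into l_employee; -- no LIMIT: one fetch gets everything
close c_get_employee;

for i in 1 .. l_employee.count loop
    -- business logic here
    dbms_output.put_line ( l_employee(i).ename );
end loop;

The cursor is closed before any processing starts, which is exactly the separation you asked for; it's only safe when the result set comfortably fits in session memory.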
I want to create a local collection with values I get from a select statement and then go over them with a FOR Loop.
I can't define the TYPE outside of the PL/SQL Block because I am not supposed to modify the Database I am querying. I saw several other threads here in SO suggesting to create a Type outside the PL SQL Block.
I want to use the Index to go over the items, because I need to do something special with the last one, I need to differentiate it from the rest.
Here a simplified version of my latest attempt:
DECLARE
TYPE itemsTYPE IS TABLE OF MY_TABLE.ITEM%TYPE;
items itemsTYPE;
BEGIN
SELECT DISTINCT ITEM INTO items
From MY_TABLE;
FOR currentIndex IN 1..items.Count
LOOP
IF currentIndex = items.Count THEN
DBMS_OUTPUT.PUT_LINE('END');
ELSE
DBMS_OUTPUT.PUT_LINE(items(currentIndex));
END IF;
END LOOP;
END;
In this attempt I get the following error:
PLS-00642: local collection types not allowed in SQL statements
I understand this error, but I don't know how to change my code so it works.
Before I tried different things and I got all kind of errors, I can't seem to find the right way to achieve what I need. Is it even possible?
Change it like this. A plain SELECT ... INTO cannot target a collection, but SELECT ... BULK COLLECT INTO can, even with a locally declared collection type:
DECLARE
TYPE itemsTYPE IS TABLE OF MY_TABLE.ITEM%TYPE;
items itemsTYPE;
BEGIN
SELECT DISTINCT ITEM BULK COLLECT INTO items
From MY_TABLE;
FOR currentIndex IN 1..items.Count
LOOP
IF currentIndex = items.Count THEN
DBMS_OUTPUT.PUT_LINE('END');
ELSE
DBMS_OUTPUT.PUT_LINE(items(currentIndex));
END IF;
END LOOP;
END;
I have a stored procedure which performs some transactions (insert / update) and I want to know which of these two options runs COMMIT more efficiently:
OPTION 1:
BEGIN
OPEN myCursor;
LOOP
FETCH myCursor INTO AUX_ID, AUX_VAR1, AUX_VAR2;
EXIT WHEN myCursor%NOTFOUND;
SELECT count(*) INTO myCount FROM myTable WHERE code = AUX_ID;
IF myCount > 0 THEN
UPDATE myTable
SET VAR1 = AUX_VAR1, VAR2 = AUX_VAR2
WHERE code = AUX_ID_BD;
COMMIT;
ELSE
INSERT INTO myTable(code, VAR1, VAR2)
VALUES(AUX_ID, AUX_VAR1, AUX_VAR2);
COMMIT;
END IF;
END LOOP;
CLOSE myCursor;
END;
OR OPTION 2:
BEGIN
OPEN myCursor;
LOOP
FETCH myCursor INTO AUX_ID, AUX_VAR1, AUX_VAR2;
EXIT WHEN myCursor%NOTFOUND;
SELECT count(*) INTO myCount FROM myTable WHERE code = AUX_ID;
IF myCount > 0 THEN
UPDATE myTable
SET VAR1 = AUX_VAR1, VAR2 = AUX_VAR2
WHERE code = AUX_ID_BD;
ELSE
INSERT INTO myTable(code, VAR1, VAR2)
VALUES(AUX_ID, AUX_VAR1, AUX_VAR2);
END IF;
END LOOP;
COMMIT;
CLOSE myCursor;
END;
it's okay? or is there a better way?
Option #2 is definitely more efficient, although it's hard to tell if it will be noticeable in your case.
Every COMMIT requires a small amount of physical I/O; Oracle must ensure all the data is written to disk, the system change number (SCN) is written to disk, and there are probably other consistency checks I'm not aware of. In practice, it takes a huge number of COMMITs from multiple users to significantly slow down a database. When that happens you may see unusual wait events involving REDO, control files, etc.
Until a COMMIT is issued, Oracle can keep the changes in memory and write them to disk asynchronously. This may allow performance equivalent to an in-memory database.
An even better option is to avoid the issue entirely by using a single MERGE statement, as Sylvain Leroux suggested. If the processing must be done in PL/SQL, at least replace the OPEN/FETCH cursor syntax with a simpler cursor FOR-loop. A cursor FOR-loop will automatically bulk collect data, significantly improving read performance.
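A sketch of that MERGE (source_table stands in for whatever query feeds myCursor; the names are illustrative):

merge into myTable t
using ( select id, var1, var2 from source_table ) s -- the query behind myCursor
on ( t.code = s.id )
when matched then
    update set t.VAR1 = s.var1, t.VAR2 = s.var2
when not matched then
    insert ( code, VAR1, VAR2 )
    values ( s.id, s.var1, s.var2 );
commit;

One statement replaces the per-row count/update/insert logic, and a single COMMIT at the end covers the whole set.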
I have a routine written in T-SQL for SQL Server. We are migrating to Oracle so I am trying to port it to PL/SQL. Here is the T-SQL routine (simplified); note the use of the table-valued variable which, in Oracle, will become a "nested table" type PL/SQL variable. The main thrust of my question is on the best ways of working with such "collection" objects within PL/SQL. Several operations in the ported code (second code sample, below) are quite awkward, where they seemed a lot easier in the SQL Server original:
DECLARE @MyValueCollection TABLE( value VARCHAR(4000) );
DECLARE @valueForThisRow VARCHAR(4000);
DECLARE @dataItem1Val INT, @dataItem2Val INT, @dataItem3Val INT, @dataItem4Val INT;
DECLARE theCursor CURSOR FAST_FORWARD FOR
SELECT DataItem1, DataItem2, DataItem3, DataItem4 FROM DataTable;
OPEN theCursor;
FETCH NEXT FROM theCursor INTO @dataItem1Val, @dataItem2Val, @dataItem3Val, @dataItem4Val;
WHILE @@FETCH_STATUS = 0
BEGIN
-- About 50 lines of logic that evaluates @dataItem1Val, @dataItem2Val, @dataItem3Val, @dataItem4Val and constructs @valueForThisRow
SET @valueForThisRow = 'whatever';
-- !!! This is the row that seems to have no natural Oracle equivalent
INSERT INTO @MyValueCollection VALUES(@valueForThisRow);
FETCH NEXT FROM theCursor INTO @dataItem1Val, @dataItem2Val, @dataItem3Val, @dataItem4Val;
END;
CLOSE theCursor;
DEALLOCATE theCursor;
-- !!! output all the results; this also seems harder than it needs to be in Oracle
SELECT * FROM @MyValueCollection;
I have been able to port pretty much everything, but in two places (see comments in the code), the logic is a lot more complex than the old SQL Server way, and I wonder if there might be, in Oracle, some more graceful way that is eluding me:
set serveroutput on; -- needed for DBMS_OUTPUT; see below
DECLARE
TYPE StringList IS TABLE OF VARCHAR2(4000);
myValueCollection StringList := StringList();
dummyTempCollection StringList; -- needed for my kludge; see below
valueForThisRow VARCHAR2(4000);
BEGIN
-- build all the sql statements
FOR c IN (
SELECT DataItem1, DataItem2, DataItem3, DataItem4 FROM DataTable
)
LOOP
-- About 50 lines of logic that evaluates c.DataItem1, c.DataItem2, c.DataItem3, c.DataItem4 and constructs valueForThisRow
valueForThisRow := 'whatever';
-- This seems way harder than it should be; I would rather not need an extra dummy collection
SELECT valueForThisRow BULK COLLECT INTO dummyTempCollection FROM dual; -- overwrites content of dummy temp
myValueCollection := myValueCollection MULTISET UNION dummyTempCollection; -- merges into main collection
END LOOP;
-- output all the results... again, there's no shorter/easier/more-compact/single-line equivalent?
IF myValueCollection.COUNT > 0
THEN
FOR indx IN myValueCollection.FIRST .. myValueCollection.LAST
LOOP
DBMS_OUTPUT.PUT_LINE(myValueCollection(indx));
END LOOP;
END IF;
END;
/
Thanks in advance for any help!
Personally, I'd take the "50 lines of logic", move it into a function that you call in your SQL statement, and then do a simple BULK COLLECT to load the data into your local collection.
Assuming that you really want to load data element-by-element into the collection, you can simplify the code that loads the collection
DECLARE
TYPE StringList IS TABLE OF VARCHAR2(4000);
myValueCollection StringList := StringList();
valueForThisRow VARCHAR2(4000);
BEGIN
-- build all the sql statements
FOR c IN (
SELECT DataItem1, DataItem2, DataItem3, DataItem4 FROM DataTable
)
LOOP
-- About 50 lines of logic that evaluates c.DataItem1, c.DataItem2, c.DataItem3, c.DataItem4 and constructs valueForThisRow
valueForThisRow := 'whatever';
myValueCollection.extend();
myValueCollection( myValueCollection.count ) := valueForThisRow;
END LOOP;
-- output all the results... again, there's no shorter/easier/more-compact/single-line equivalent?
IF myValueCollection.COUNT > 0
THEN
FOR indx IN myValueCollection.FIRST .. myValueCollection.LAST
LOOP
DBMS_OUTPUT.PUT_LINE(myValueCollection(indx));
END LOOP;
END IF;
END;
/
If you declare the collection as an associative array, you could avoid calling extend to increase the size of the collection. If you know the number of elements that you are going to load into the collection, you could pass that to a single extend call outside the loop. Potentially, you can also eliminate the valueForThisRow local variable and just operate on elements in the collection.
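For example, with an associative array (a sketch; the query is trimmed to one column for brevity), assigning to a new index grows the collection implicitly:

DECLARE
    TYPE StringList IS TABLE OF VARCHAR2(4000) INDEX BY PLS_INTEGER; -- associative array
    myValueCollection StringList;
    valueForThisRow   VARCHAR2(4000);
BEGIN
    FOR c IN ( SELECT DataItem1 FROM DataTable )
    LOOP
        valueForThisRow := 'whatever';
        -- no extend() needed: assignment to a new index creates the element
        myValueCollection( myValueCollection.count + 1 ) := valueForThisRow;
    END LOOP;
END;
/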
As for the code that processes the collection, what is it that you are really trying to do? It would be highly unusual for production code to write to dbms_output and expect that anyone will see the output during normal processing. That will influence the way that you would write that code. Assuming that your intention really is just to call dbms_output, knowing that this will generally send the data into the ether:
FOR indx IN 1 .. myValueCollection.count
LOOP
dbms_output.put_line( myValueCollection(indx) );
END LOOP;
This works when you have a dense collection (all indexes between 1 and the count of the collection exist and have values). If you might have a sparse collection, you would want to use FIRST, NEXT, and LAST in a loop but that's a bit more code.