How many times does a query defined in a cursor execute? - oracle

I have a stored procedure in an Oracle database.
In this stored procedure I have defined a cursor.
I iterate through the result set using:
FOR item IN cursor_name LOOP
END LOOP;
How many times does the query execute? Is there a way for me to know? Also, is this the best approach, or should I iterate in a different way?
Thanks.

The query in the cursor cursor_name is executed only once. How many times you fetch from that cursor is another matter. Each fetch means a context switch between the PL/SQL and SQL engines. From Oracle version 10 onwards, if the parameter PLSQL_OPTIMIZE_LEVEL is at its default of 2 or higher, an optimization kicks in and the cursor FOR loop fetches 100 rows at a time. Without it, each row is fetched separately, which harms performance considerably when fetching a lot of rows.
Also beware of putting SQL statements inside the loop. When you do that, you will obviously execute those statements as many times as there are rows fetched from your cursor.
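If you want to verify the execution and fetch counts yourself, one way (assuming you can query the V$ views) is to look at the statement's entry in V$SQL after running the loop - the SQL text filter below is only an illustrative placeholder:
-- Executions vs. fetches for the cursor's query (the LIKE filter is a placeholder)
SELECT sql_id,
       executions,       -- how many times the query was executed
       fetches,          -- how many fetch calls were made against it
       rows_processed
FROM   v$sql
WHERE  sql_text LIKE 'SELECT%FROM MY_TABLE%';
With the 100-row optimization in place you should see a single execution and roughly one fetch per 100 rows.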
Regards,
Rob.

Related

Oracle: Return Large Dataset with Cursor in Procedure

I've seen lots of posts regarding the use of cursors in PL/SQL to return data to a calling application, but none of them touch on the issue I believe I'm having with this technique. I am fairly new to Oracle, but have extensive experience with MSSQL Server. In SQL Server, when building queries to be called by an application for returning data, I usually put the SELECT statement inside a stored proc with/without parameters, and let the stored proc execute the statement(s) and return the data automatically. I've learned that with PL/SQL, you must store the resulting dataset in a cursor and then consume the cursor.
We have a query that doesn't necessarily return huge amounts of rows (~5K - 10K rows), however the dataset is very wide as it's composed of 1400+ columns. Running the SQL query itself in SQL Developer returns results instantaneously. However, calling a procedure that opens a cursor for the same query takes 5+ minutes to finish.
CREATE OR REPLACE PROCEDURE PROCNAME(RESULTS OUT SYS_REFCURSOR)
AS
BEGIN
   OPEN RESULTS FOR
      <SELECT_query_with_1400+_columns>
      ...
END;
After doing some debugging to try to get to the root cause of the slowness, I'm leaning towards the cursor returning one row at a time, very slowly. I can actually see this in real time by converting the proc code into a PL/SQL block and using DBMS_SQL.return_result(RESULTS) after the SELECT query. When running this, I can see each row show up in the Script Output window in SQL Developer one at a time. If this is exactly how the cursor returns the data to the calling application, then I can definitely see how this is the bottleneck, as it could take 5-10 minutes to finish returning all 5K-10K rows. If I remove columns from the SELECT query, the cursor displays all the rows much faster, so it does seem like the large number of columns is an issue when using a cursor.
Knowing that running the SQL query by itself returns instant results, how could I get this same performance out of a cursor? It doesn't seem like it's possible. Is the answer putting the embedded SQL in the application code and not using a procedure/cursor to return data in this scenario? We are using Oracle 12c in our environment.
Edit: Just want to address how I am testing performance using the regular SELECT query vs the PL/SQL block with cursor method:
SELECT (takes ~27 seconds to return ~6K rows):
SELECT <1400+_columns>
FROM <table_name>;
PL/SQL with cursor (takes ~5-10 minutes to return ~6K rows):
DECLARE
   RESULTS SYS_REFCURSOR;
BEGIN
   OPEN RESULTS FOR
      SELECT <1400+_columns>
      FROM   <table_name>;
   DBMS_SQL.return_result(RESULTS);
END;
Some of the comments are referencing what happens in the console application once all the data is returned, but I am speaking only about the performance of the two methods described above within Oracle SQL Developer. Hope this helps clarify the point I'm trying to convey.
You can run a SQL Monitor report for the two executions of the SQL; that will show you exactly where the time is being spent. I would also consider running the two approaches in separate snapshot intervals and comparing the output of an AWR Differences report and an ADDM Compare report; you'd probably be surprised at the amazing detail these comparison reports provide.
Also, even though more than 255 columns in a table is a "no-no" according to Oracle, as it fragments each record across more than one database block and thus increases the I/O needed to retrieve the results, I suspect the difference you are seeing between the two approaches is not an I/O problem, since you report fast results when fetching everything in straight SQL. Therefore, I suspect more of a memory problem. As you probably know, PL/SQL code uses the Program Global Area (PGA), so I would check the parameter pga_aggregate_target and bump it up to, say, 5 GB (just guessing). An ADDM report run for the interval when the code ran will tell you if the advisor recommends a change to that parameter.
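For reference, a minimal way to pull such a SQL Monitor report once you have the statement's SQL_ID (the SQL_ID below is a placeholder; SQL Monitoring requires the Tuning Pack and only captures statements that ran in parallel, ran longer than about 5 seconds, or carry the /*+ MONITOR */ hint):
-- Text-format SQL Monitor report for one monitored statement
SELECT DBMS_SQLTUNE.report_sql_monitor(
          sql_id       => '0abc123def456',   -- placeholder SQL_ID
          type         => 'TEXT',
          report_level => 'ALL') AS report
FROM   dual;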

Explicit cursors using bulk collect vs. implicit cursors: any performance issues?

In an older article from Oracle Magazine (now online as On Cursor FOR Loops) Steven Feuerstein showed an optimization for explicit cursor for loops using bulk collect (listing 4 in the online article):
DECLARE
   CURSOR employees_cur IS SELECT * FROM employees;
   TYPE employee_tt IS TABLE OF employees_cur%ROWTYPE INDEX BY PLS_INTEGER;
   l_employees employee_tt;
BEGIN
   OPEN employees_cur;
   LOOP
      FETCH employees_cur BULK COLLECT INTO l_employees LIMIT 100;
      -- process l_employees using pl/sql only
      EXIT WHEN employees_cur%NOTFOUND;
   END LOOP;
   CLOSE employees_cur;
END;
I understand that bulk collect enhances performance because there are fewer context switches between SQL and PL/SQL.
My question is about implicit cursor for loops:
BEGIN
   FOR S IN (SELECT * FROM employees)
   LOOP
      -- process current record of S
   END LOOP;
END;
Is there a context switch in each loop for each record? Is the problem the same as with explicit cursors or is it somehow optimized "behind the scene"? Would it be better to rewrite the code using explicit cursors with bulk collect?
Starting from Oracle 10g the optimizing PL/SQL compiler can automatically convert FOR LOOPs into BULK COLLECT loops with a default array size of 100.
So generally there's no need to convert implicit FOR loops into BULK COLLECT loops.
But sometimes you may want to use BULK COLLECT explicitly instead. For example, if the default array size of 100 rows per fetch does not satisfy your requirements, or if you prefer to update your data as a set, as sketched below.
The same question was answered by Tom Kyte. You can check it here: Cursor FOR loops optimization in 10g
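As a sketch of that second case, assuming an HR-style employees table with employee_id and salary columns: an explicit BULK COLLECT with a larger LIMIT feeding a FORALL statement does one bulk update per batch instead of row-by-row DML:
DECLARE
   CURSOR employees_cur IS
      SELECT employee_id, salary FROM employees;
   TYPE id_tt  IS TABLE OF employees.employee_id%TYPE INDEX BY PLS_INTEGER;
   TYPE sal_tt IS TABLE OF employees.salary%TYPE INDEX BY PLS_INTEGER;
   l_ids  id_tt;
   l_sals sal_tt;
BEGIN
   OPEN employees_cur;
   LOOP
      FETCH employees_cur BULK COLLECT INTO l_ids, l_sals LIMIT 1000;
      EXIT WHEN l_ids.COUNT = 0;

      -- one context switch updates the whole batch
      FORALL i IN 1 .. l_ids.COUNT
         UPDATE employees
         SET    salary = l_sals(i) * 1.1
         WHERE  employee_id = l_ids(i);
   END LOOP;
   CLOSE employees_cur;
END;
/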
Yes, even if your -- process current record of S block contains pure SQL and no PL/SQL, you have context switches, as the FOR ... LOOP is PL/SQL but the query is SQL.
Whenever possible you should prefer to process your data with single SQL statements (consider also MERGE, not only DELETE, UPDATE and INSERT); in most cases they are faster than row-by-row processing (see the sketch below).
Note that you will not gain any performance if you loop through l_employees and perform DML for each record.
LIMIT 100 is rather useless. Processing only 100 rows at a time is almost the same as processing rows one by one - Oracle does not run on a Z80 with 64 KB of memory.
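To illustrate the single-statement advice above, a sketch with made-up table and column names - one MERGE replaces the whole fetch-and-update loop and runs entirely in the SQL engine:
-- Table and column names are illustrative
MERGE INTO employees e
USING (SELECT employee_id, new_salary FROM salary_staging) s
ON    (e.employee_id = s.employee_id)
WHEN MATCHED THEN
   UPDATE SET e.salary = s.new_salary
WHEN NOT MATCHED THEN
   INSERT (employee_id, salary)
   VALUES (s.employee_id, s.new_salary);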

UPDATE statement: Returning into refcursor

I have a query that updates a set of records based on specific criteria. I want to get columns of the result set of that update statement and pass it back in a refcursor.
I can get the result set by using RETURNING INTO, or in my case, RETURNING myrows BULK COLLECT INTO .... However, I'm not sure how to make this work with a cursor - you can't do an OPEN cursor FOR with an update statement.
I'm guessing there's a way to get the results of a RETURNING statement into my cursor. How can I do this?
Assuming that you have a SQL collection defined (rather than a PL/SQL collection), you should be able to
RETURNING my_column
BULK COLLECT INTO my_collection;
and then
OPEN p_rc
FOR SELECT *
FROM TABLE( my_collection );
Though that works, there are some caveats. If you expect the UPDATE to modify a large number of rows (or you expect many sessions to be running this code), storing all this data in a collection may consume a large amount of space in the PGA which may negatively impact performance. Reading a bunch of data into a collection just to send it all back to the SQL engine also tends to be a bit inelegant. And, as I said initially, this assumes that your collection is declared at the SQL level rather than being declared in PL/SQL.
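Putting that together, a sketch of the whole pattern; the collection type, table and column names are illustrative, and the type has to be created at the SQL level:
-- SQL-level collection type (not declared inside PL/SQL)
CREATE TYPE num_tab_t AS TABLE OF NUMBER;
/

CREATE OR REPLACE PROCEDURE update_and_return (p_rc OUT SYS_REFCURSOR)
AS
   l_ids num_tab_t;
BEGIN
   UPDATE orders
   SET    status = 'PROCESSED'
   WHERE  status = 'NEW'
   RETURNING order_id BULK COLLECT INTO l_ids;

   -- hand the affected keys back; the caller could also join them to the table
   OPEN p_rc FOR
      SELECT column_value AS order_id
      FROM   TABLE(l_ids);
END;
/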

Return data rows from a pl/sql block

I want to write pl/sql code which utilizes a Cursor and Bulk Collect to retrieve my data. My database has rows in the order of millions, and sometimes I have to query it to fetch nearly all records on client's request. I do the querying and subsequent processing in batches, so as to not congest the server and show incremental progress to the client. I have seen that digging down for later batches takes considerably more time, which is why I am trying to do it by way of cursor.
Here is what should be simple pl/sql around my main sql query:
declare
   cursor device_row_cur
   is
      select /*my_query_details*/;
   type l_device_rows is table of device_row_cur%rowtype;
   out_entries l_device_rows := l_device_rows();
begin
   open device_row_cur;
   fetch device_row_cur
      bulk collect into out_entries
      limit 100;
   close device_row_cur;
end;
I am doing batches of 100, and fetching them into out_entries. The problem is that this block compiles and executes just fine, but doesn't return the data rows it fetched. I would like it to return those rows just the way a select would. How can this be achieved? Any ideas?
An anonymous block can't return anything. You can assign values to a bind variable, including a collection type or ref cursor, inside the block. But the collection would have to be defined, as well as declared, outside the block. That is, it would have to be a type you can use in plain SQL, not something defined in PL/SQL. At the moment you're using a PL/SQL type that is defined within the block, and a variable that is declared within the block too - so it's out of scope to the client, and wouldn't be a valid type outside it either. (It also doesn't need to be initialised, but that's a minor issue).
Depending on how it will really be consumed, one option is to use a ref cursor, which you can declare and display through SQL*Plus or SQL Developer with the variable and print commands. For example:
variable rc sys_refcursor
begin
   open :rc for ( select ... /* your cursor statement */ );
end;
/
print rc
You can do something similar from a client application, e.g. have a function returning a ref cursor or a procedure with an out parameter that is a ref cursor, and bind that from the application. Then iterate over the ref cursor as a result set. But the details depend on the language your application is using.
Another option is to have a pipelined function that returns a table type - again defined at SQL level (with create type) not in PL/SQL - which might consume fewer resources than a collection that's returned in one go.
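A sketch of that pipelined option, with illustrative type, table and column names and a stand-in query:
-- SQL-level types, created once outside PL/SQL
CREATE TYPE device_row_t AS OBJECT (device_id NUMBER, device_name VARCHAR2(100));
/
CREATE TYPE device_tab_t AS TABLE OF device_row_t;
/

CREATE OR REPLACE FUNCTION device_rows RETURN device_tab_t PIPELINED
AS
BEGIN
   FOR r IN (SELECT device_id, device_name FROM devices) LOOP
      -- each row is streamed to the caller instead of being collected first
      PIPE ROW (device_row_t(r.device_id, r.device_name));
   END LOOP;
   RETURN;
END;
/
The client then consumes it like a table: SELECT * FROM TABLE(device_rows());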
But I'd have to question why you're doing this. You said "digging down for later batches takes considerably more time", which sounds like you're using a paging mechanism in your query, generating a row number and then picking out a range of 100 within that. If your client/application wants to get all the rows then it would be simpler to have a single query execution but fetch the result set in batches.
Unfortunately without any information about the application this is just speculation...
I studied this excellent paper on optimizing pagination:
http://www.inf.unideb.hu/~gabora/pagination/article/Gabor_Andras_pagination_article.pdf
I used technique 6 mainly. It describes how to limit the query to fetch page x and onward. For added improvement, you can limit it further to fetch page x alone. Used right, it can bring a performance improvement by a factor of 1000.
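For reference, the classic ROWNUM pagination shape that limits the query to a single page looks like the sketch below (binds, table and ordering column are placeholders; the paper's technique 6 may differ in its details):
SELECT *
FROM  (SELECT a.*, ROWNUM AS rn
       FROM  (SELECT /* my_query_details */ *
              FROM   devices
              ORDER  BY device_id) a
       WHERE ROWNUM <= :last_row)
WHERE rn > :first_row;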
Instead of returning custom table rows (which is very hard, if not impossible, to interface with Java), I ended up opening a sys_refcursor in my pl/sql, which can be interfaced like this:
OracleCallableStatement stmt = (OracleCallableStatement) connection.prepareCall(sql);
stmt.registerOutParameter(someIndex, OracleTypes.CURSOR); // position of the ref cursor out-parameter
stmt.execute();
resultSet = stmt.getCursor(someIndex);
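The PL/SQL side of that call is just a procedure with a SYS_REFCURSOR out parameter, along the lines of the sketch below (procedure and column names are illustrative); the sql string passed to prepareCall would then be something like "{call get_devices(?)}":
CREATE OR REPLACE PROCEDURE get_devices (p_rc OUT SYS_REFCURSOR)
AS
BEGIN
   OPEN p_rc FOR
      SELECT device_id, device_name
      FROM   devices;            -- stand-in for the real paginated query
END;
/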

Is a pl/sql refcursor meant to return few records only?

I have a query that performs well and is tuned, too. I put that query in a procedure. When I execute the query from SQL with a set of values for the bind variables used in it, the result is produced in 3-4 seconds max.
The same result set coming from the refcursor takes over 2 minutes to give the result. I understand it is the OPEN, FETCH and CLOSE of the cursor that might be taking the time.
I have verified that nothing else in that procedure is consuming the time so that is ruled out.
The number of records returned is around 9,000+, which brings me to the question: is a ref cursor somehow less suitable when the recordset grows beyond some size?
Is the RAM size a problem? I have used TOAD to execute both the query and the procedure to compare. And yes, I have gone to the last record, so it is not as if the query only returned the first few.
What else can be done to improve this REFCURSOR speed? Any help is much appreciated.
Are you using BULK COLLECT to grab multiple rows at once?
OPEN c_cursor;
LOOP
   FETCH c_cursor
      BULK COLLECT INTO l_tab LIMIT 1000; -- or omit the LIMIT to fetch everything at once
   FOR i IN 1 .. l_tab.COUNT LOOP
      -- process each row
   END LOOP;
   EXIT WHEN c_cursor%NOTFOUND;
END LOOP;
CLOSE c_cursor;
