Oracle 11g PL/SQL Diana Nodes Limit

I have a statement such as the one below, but it's padded out to do 1000 calls at a time. Anything over that throws a PLS-00123 error: program too large (Diana nodes).
begin
  sp_myprocedure(....);
  sp_myprocedure(....);
  sp_myprocedure(....);
  sp_myprocedure(....);
end;
We are moving to 11g and I was wondering if this limitation could be increased to 2000 for example.
Thanks

"I have a statement such as below but its padded out to do 1000 calls
at a time"
This is a very bad programming strategy. Writing the same thing multiple times is a code smell. Any time we find ourselves programming by cut-and-paste followed by a bit of editing, we should stop and ask ourselves, 'hmmm, is there a better way to do this?'
"The parameters are different for each stored procedure call"
Yes, but the parameters have to come from somewhere. Presumably at the moment you are hard-coding them one thousand times. Yuck.
A better solution would be to store them in a table. Then you could write a simple loop. Like this:
for prec in ( select p1, p2
              from my_parameters
              order by id -- if ordering is important
            )
loop
  sp_myprocedure(prec.p1, prec.p2);
end loop;
Because you are storing the parameters in a table you can have as many calls to that proc as you like, and you are not bound by the Diana node limit.
True, you will have to move your parameter values into a table, but it is no harder to maintain data in a table than it is to maintain hardcoded values in source code.
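If it helps, a minimal sketch of what such a parameter table could look like (the table and column names follow the loop example above; the types and sizes are illustrative assumptions to be adjusted to sp_myprocedure's signature):

```sql
-- Hypothetical parameter table; one row per intended procedure call.
CREATE TABLE my_parameters (
  id  NUMBER PRIMARY KEY,   -- drives the ORDER BY, if call order matters
  p1  VARCHAR2(100),
  p2  VARCHAR2(100)
);

INSERT INTO my_parameters (id, p1, p2) VALUES (1, 'first-value', 'second-value');
-- ... one INSERT per call, instead of one hard-coded statement per call
```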

If you're just moving from 10g, then I don't believe the limit has changed; so if you're having problems now, you'll have them again in 11g. Take a look at this Ask Tom article. A general suggestion is to put your procedure in a package, or break it down into smaller blocks. If you only get the error when running the block that calls the procedure 1000 times, and not when compiling the procedure on its own, then I suggest you try what APC says and loop through it instead, as this should reduce the number of nodes.
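For the package suggestion, a minimal sketch (names here are illustrative; a packaged procedure is compiled as a stored unit rather than an anonymous block, which is where the more generous size limit comes from):

```sql
CREATE OR REPLACE PACKAGE my_batch AS
  PROCEDURE run_all;
END my_batch;
/
CREATE OR REPLACE PACKAGE BODY my_batch AS
  PROCEDURE run_all IS
  BEGIN
    -- the sp_myprocedure calls from the anonymous block move here,
    -- or better still, into a loop over a parameter table as APC suggests
    NULL;
  END run_all;
END my_batch;
/
```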


generating "large" files through dbms_output

... before anyone marks this as a duplicate: I have checked Is there any way to flush output from PL/SQL in Oracle?
and am looking for more information on how to actually do what that answer referenced ... so hold it.
I absolutely need to use DBMS_OUTPUT due to various reasons. UTL_FILE won't help.
For the benefit of this question, let's say I'm generating a flat file.
declare
  type t_array is table of varchar2(32767);

  -- swallows an array and dumps it to the output buffer
  procedure printerFunction(someArray t_array) is
  begin
    for i in 1 .. someArray.count loop
      dbms_output.put_line(someArray(i));
    end loop;
  end printerFunction;
begin
  for row in some_cursor loop
    printerFunction(someArrayData);
  end loop;
end;
The aforementioned code block is essentially the gist of the matter... nothing special.
Dave Costa mentions something along the lines of "break up a large PL/SQL block into multiple smaller blocks" ... the question is how.
I understand I have a nested loop structure, and this is most likely the reason the output buffer is not flushing itself, but I can't think of a way to keep some_cursor open or, for all intents and purposes, switch back and forth between two code blocks outside of that loop.
dbms_output.enable(null) is kind of a dumb idea in this case as well. Ideally I'd like to flush the stuff in the buffer to my SQL Developer Script Output window and move on with processing at a specific rate ... say every 10,000 rows or so. Is this even possible? ... I mean, the main begin [pl/sql code] end structure is essentially a code block itself.
... the DB I'm working on is a production environment, and I can only use a limited, read-only set of commands ... in other words, SYS stuff is beyond my reach.
As Dave Costa explains in the answer you link to, if you want the client to display the information before all your processing is done, you'd need to break your code up into separate blocks that you could send separately to the database. The client has no way to read the data out of the dbms_output buffer until the database returns control to the client.
You wouldn't realistically keep a single cursor open across these separate calls. You'd open a new cursor each time. For example, if you had a block
begin
for cursor in (select * from ... order by ... offset x fetch next y rows only)
loop
<<do something>>
end loop;
end;
you could have that run for 10,000 rows (or whatever value of y you want), get the output in SQL Developer, then run another block to process the next y rows (or have x and y be variables defined in SQL Developer that you update in each iteration, or create a package to save state in package variables, but I assume that is not an option here). But you'd need to submit a new PL/SQL block to the database for each set of y rows that you wanted to process. And this query will get less efficient as you try to get later and later pages, since it has to sort more and more data to get the results.
Practically, it is pretty unlikely that you'd really want to break your logic up like this. You'd almost always be better off submitting a single PL/SQL block and letting the client fetch the data once at the end. But using dbms_output to generate a file is pretty weird in the first place. You'd generally be better served letting SQL Developer generate the file. For example, if you return a sys_refcursor for
select *
from table(someArrayData)
or just run whatever query was used to generate someArrayData, SQL Developer can happily save that data into a CSV file, an XLS(X) file, etc.
You can clear the buffer by consuming the messages manually. This snippet puts something in the buffer and then clears it. My client shows no output.
DECLARE
var_status integer := 0;
var_dummy varchar2(32767);
BEGIN
dbms_output.put_line('this is a test');
WHILE var_status = 0
LOOP
DBMS_OUTPUT.GET_LINE (line => var_dummy,
status => var_status);
END LOOP;
END;

Oracle: Return Large Dataset with Cursor in Procedure

I've seen lots of posts regarding the use of cursors in PL/SQL to return data to a calling application, but none of them touch on the issue I believe I'm having with this technique. I am fairly new to Oracle, but have extensive experience with MSSQL Server. In SQL Server, when building queries to be called by an application for returning data, I usually put the SELECT statement inside a stored proc with/without parameters, and let the stored proc execute the statement(s) and return the data automatically. I've learned that with PL/SQL, you must store the resulting dataset in a cursor and then consume the cursor.
We have a query that doesn't necessarily return huge amounts of rows (~5K - 10K rows), however the dataset is very wide as it's composed of 1400+ columns. Running the SQL query itself in SQL Developer returns results instantaneously. However, calling a procedure that opens a cursor for the same query takes 5+ minutes to finish.
CREATE OR REPLACE PROCEDURE PROCNAME(RESULTS OUT SYS_REFCURSOR)
AS
BEGIN
OPEN RESULTS FOR
<SELECT_query_with_1400+_columns>
...
END;
After doing some debugging to try to get to the root cause of the slowness, I'm leaning towards the cursor returning one row at a time very slowly. I can actually see this real-time by converting the proc code into a PL/SQL block and using DBMS_SQL.return_result(RESULTS) after the SELECT query. When running this, I can see each row show up in the Script output window in SQL Developer one at a time. If this is exactly how the cursor returns the data to the calling application, then I can definitely see how this is the bottleneck as it could take 5-10 minutes to finish returning all 5K-10K rows. If I remove columns from the SELECT query, the cursor displays all the rows much faster, so it does seem like the large amount of columns is an issue using a cursor.
Knowing that running the SQL query by itself returns instant results, how could I get this same performance out of a cursor? It doesn't seem like it's possible. Is the answer putting the embedded SQL in the application code and not using a procedure/cursor to return data in this scenario? We are using Oracle 12c in our environment.
Edit: Just want to address how I am testing performance using the regular SELECT query vs the PL/SQL block with cursor method:
SELECT (takes ~27 seconds to return ~6K rows):
SELECT <1400+_columns>
FROM <table_name>;
PL/SQL with cursor (takes ~5-10 minutes to return ~6K rows):
DECLARE RESULTS SYS_REFCURSOR;
BEGIN
OPEN RESULTS FOR
SELECT <1400+_columns>
FROM <table_name>;
DBMS_SQL.return_result(RESULTS);
END;
Some of the comments reference what happens in the console application once all the data is returned, but I am only speaking about the performance of the two methods described above within Oracle/SQL Developer. Hope this clarifies the point I'm trying to convey.
You can run a SQL Monitor report for the two executions of the SQL; that will show you exactly where the time is being spent. I would also consider running the two approaches in separate snapshot intervals and comparing the output of an AWR Differences report and an ADDM Compare report; you'd probably be surprised at the amazing detail these comparison reports provide.
Also, even though more than 255 columns in a table is a "no-no" according to Oracle, as it will fragment your record across more than one database block and thus increase the I/O time needed to retrieve the results, I suspect the difference between the two approaches is not an I/O problem, since you report that straight SQL fetches all rows quickly. Therefore, I suspect more of a memory problem. As you probably know, PL/SQL code uses the Program Global Area (PGA), so I would check the parameter pga_aggregate_target and bump it up to, say, 5 GB (just guessing). An ADDM report run for the interval when the code ran will tell you if the advisor recommends a change to that parameter.
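For reference, one way to pull the SQL Monitor report mentioned above, assuming the statement was monitored and you hold the Tuning Pack license (:sql_id is a placeholder for the slow query's ID, which you can look up in v$sql):

```sql
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id => :sql_id,   -- the query's SQL_ID from v$sql
         type   => 'TEXT'     -- 'HTML' and 'ACTIVE' are also available
       ) AS report
FROM dual;
```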

Execute immediate fills up the library cache

I have a question regarding how queries executed through EXECUTE IMMEDIATE are treated in the library cache (we use Oracle 11).
Let's say I have a function like this:
FUNCTION get_meta_map_value (
getfield IN VARCHAR2,
searchfield IN VARCHAR2,
searchvalue IN VARCHAR2
) RETURN VARCHAR2 IS
v_outvalue VARCHAR2(32767);
sql_stmt VARCHAR2(2000) := 'SELECT '||getfield||' FROM field_mapping, metadata '||
'WHERE field_mapping.metadataid = metadata.metadataid AND rownum = 1 AND '||searchfield||' = :1';
BEGIN
EXECUTE IMMEDIATE sql_stmt INTO v_outvalue USING searchvalue;
...
The getfield and searchfield are always the same within one installation (but have different values in another installation, which is why we use dynamic SQL).
So this leaves us with a SQL statement that only differs in the searchvalue (which is a bind parameter).
This function is called in a loop that executes x times, from inside another stored procedure.
The stored procedure is executed y times during the connection life time, through ODBC connection.
And there are z connections, but each of them uses the same database login.
Now let us also assume that the searchvalue changes b times during one loop.
Question 1:
When calculating how many copies of the sql will be kept in the library cache,
can we disregard the different values the searchvalue can have (b), as the value is sent as a parameter to execute immediate?
Question 2:
Will the loop cause a hard parse of the query x times (query will be created in library cache x times), or can Oracle reuse the query?
(We assume that the searchvalue is the same for all calls in this question here, for simplicity)
Question 3:
Does the y (number of times the stored procedure is called from odbc during the lifetime of one connection)
also multiply the amount of copies of the query that are kept in library cache?
Question 4:
Does the z (number of simultaneous connections with same db login)
multiply the amount of copies of the query that are kept in library cache?
Main question:
What behaviour should I expect here?
Is the behaviour configurable?
The cause for this question is that we have had this code in production for 4 years, and now one of our customers gets back to us and says, "This query fills our whole SGA, and Oracle says it's your fault."
The number of different combinations of getfield and searchfield should determine how many "copies" there will be. I use the word "copies" cautiously, because Oracle will treat each variation as distinct. Since you are using a bind variable for searchvalue, however many values you have for it will not add to the query count.
In short, it looks like your code is OK.
Number of connections should not increase the hard parses.
Ask for an AWR report to see exactly how many of these queries are in the SGA, and how many hard parses are being triggered.
I will disagree that the number of connections will not increase the hard parse count for the posted code, because the last I knew, dynamic SQL cannot be shared between sessions. Since the generated SQL uses a bind variable, it should generate a reusable statement within a session, but it will not be sharable between user sessions. As a general rule, dynamic SQL should be used only for infrequently executed statements. You may want to refer to the following:
Designing Applications for Performance and Scalability, An Oracle White Paper, July 2005:
https://www.oracle.com/technetwork/database/performance/designing-applications-for-performa-131870.pdf
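One way to settle the question rather than guessing: count the cursor copies of the generated statement in the shared pool. The text filter below is an assumption based on the SQL built by the function above:

```sql
-- Each distinct getfield/searchfield combination should show up as its
-- own SQL_ID; repeated hard parses show up as high PARSE_CALLS or LOADS.
SELECT sql_id, child_number, executions, parse_calls
FROM   v$sql
WHERE  sql_text LIKE 'SELECT % FROM field_mapping, metadata WHERE%';
```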

Postgres Function - SQLs in Loop really a prepared Statement?

Right now I am doing for my company a migration from Firebird 2.5 to Postgres 9.4 and I also converted Stored Procedures from Firebird into Functions to Postgres...
Now I have figured out that the performance is quite slow, but only when there are loops in which I execute more SQL statements with changing parameters.
So, for example, it looks like this (simplified to the essentials):
CREATE OR REPLACE FUNCTION TEST
(TEST_ID BigInt) returns TABLE(NAME VARCHAR)
AS $$
declare _tmp bigint;
begin
for _tmp in select id from test
loop
-- Shouldn't the following SQL work as a Prepared Statement?
for name in select label
from test2
where id = _tmp
loop
return next;
end loop;
end loop;
end; $$
LANGUAGE plpgsql;
So if I compare the time it takes to execute just the SELECT inside the loop, Postgres is usually a bit faster than Firebird. But if the loop runs 100, 1000, or 10000 times, the Firebird stored procedure is much faster. When I compare the times in Postgres, it seems that if the loop runs 10 times it takes 10 times as long as for 1 row, and if it runs 1000 times it takes 1000 times as long... That should not be the case if it is really a prepared statement, right?
I also checked other things, like raising the memory settings and leaving out the "return next" statement, because I read that it can cause performance problems too...
It also has nothing to do with the "returns table" expression; if I leave that out, it takes the same time.
Nothing has worked so far...
Of course this simple example could be solved also with one SQL, but the functions I migrated are much more complicated and I don't want to change the whole functions into something new (if possible)...
Am I missing something?
PL/pgSQL reuses prepared queries across function invocations; you only incur preparation overhead once per session. So unless you've been reconnecting between each test, the linear execution times are expected.
But it may also reuse execution plans, and sometimes this does not work to your advantage. Running your query in an EXECUTE statement can give better performance, despite the overhead of repreparing it each time.
See the PL/pgSQL documentation for more detail.
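As a sketch of that suggestion, here is the inner query from the example rewritten with EXECUTE ... USING, which builds a fresh plan on every call (at the cost of re-parsing each time):

```sql
-- inside the outer loop of the example function
for name in
    execute 'select label from test2 where id = $1'
    using _tmp
loop
    return next;
end loop;
```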
Finally got it... It was an index problem, though it doesn't completely make sense to me, because when I executed the SQL statements outside the function, they were even faster than Firebird with indexes. Now, outside the functions, they run twice as fast in Postgres as before, and the functions are now also really fast; faster than in Firebird...
The reason I did not consider this earlier is that in Firebird, foreign keys also act as indexes. I expected the same in Postgres, but that's not the case...
I should have considered that earlier, also because of the comments from Frank and Pavel.
Thanks to all anyway...
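For anyone hitting the same thing: unlike Firebird, PostgreSQL does not create an index on a foreign key's referencing column automatically (only the referenced unique/primary key side is indexed). The fix is an explicit index; the name here is illustrative and follows the simplified example:

```sql
CREATE INDEX test2_id_idx ON test2 (id);
```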

Is it a bad practice to use EXIT WHEN instruction when looping through CURSORs in Oracle?

It may sound like a silly question, but I hope I'll make myself clear enough.
1. When talking about spaghetti code, the basis of it is the usage of GOTOs. I had a peer who used to say: "If I put a breakpoint at the end of the code and this breakpoint isn't reached every time, something is wrong."
2. Nevertheless, it is common (and, I'd say, a rule) to use EXIT WHEN structures within Oracle packages (usually followed by a %NOTFOUND test).
Taking for granted that using EXIT breaks the programming flow, isn't there a mismatch between 1 and 2?
Is everyone programming in PL/SQL following a bad practice? Does PL/SQL not follow this specific pattern for conditionals?
Is there any performance reason under Oracle's hood to use such statements?
Apologies if this question has been already asked, I couldn't find anything similar around.
Yes, many people are following a bad practice.
Bad Style
I agree with @Osy that OPEN/FETCH/CLOSE adds completely unnecessary code. I would go even further and say that you should almost never use an explicit CURSOR.
First of all, you normally want to do as much as possible in plain SQL. If you need to use PL/SQL, use an implicit cursor. It will save you a line of code and will help you keep related logic closer together.
I'm a strong believer in keeping individual units of code as small as possible. At first glance, it seems like a CURSOR can help you do this. You can define your SQL up top in one place, and then do the PL/SQL looping later.
But in reality, that extra layer of indirection is almost never worth it. Sometimes a lot of logic is in SQL, and sometimes a lot of logic is in PL/SQL. But in practice, it rarely makes sense to put a lot of complex logic in both. Your code usually ends up looking like
one of these:
for records in (<simple SQL>) loop
<complex PL/SQL>
end loop;
or:
for records in
(
<complex SQL>
) loop
<simple PL/SQL>;
end loop;
Either way, one of your code sections will be very small. The complexity of separating those two sections of code is greater than the complexity of a larger, single section of code. (But that is obviously my opinion.)
Bad Performance
There are significant performance implications with using OPEN/FETCH/CLOSE. That method is much slower than using a cursor for loop or an implicit cursor.
The compiler can automatically use bulk collect in some for loops. But, to quote from the Oracle presentation "PL/SQL Performance—Debunking the Myths", page 122:
Don’t throw this chance away by using the open, fetch loop, close form
Here's a quick example:
--Sample data
create table t(a number, b number);
insert into t select level, level from dual connect by level <= 100000;
commit;
--OPEN/FETCH/CLOSE
--1.5 seconds
declare
cursor test_cur is
select a, b from t;
test_rec test_cur%rowtype;
counter number := 0;
begin
open test_cur;
loop
fetch test_cur into test_rec;
exit when test_cur%notfound;
counter := counter + 1;
end loop;
close test_cur;
end;
/
--Implicit cursor
--0.2 seconds
declare
counter number := 0;
begin
for test_rec in (select a, b from t) loop
counter := counter + 1;
end loop;
end;
/
It is very advisable to keep your code simple, so I can tell you what a PL/SQL guru says about it:
NOTE: In some cases the use of a CURSOR FOR loop is not recommended. You can consider a more intelligent choice between a single SELECT INTO or BULK COLLECT, according to your needs.
Source : On Cursor FOR Loops, Oracle Magazine By Steven Feuerstein
(Reference: Feuerstein, TOP TWENTY PL/SQL TIPS AND TECHNIQUES):
Loops
12. Take advantage of the cursor FOR loop.
The cursor FOR loop is
one of my favorite PL/SQL constructs. It leverages fully the tight and
effective integration of the procedural aspects of the language with
the power of the SQL database language. It reduces the volume of code
you need to write to fetch data from a cursor. It greatly lessens the
chance of introducing loop errors in your programming - and loops are
one of the more error-prone parts of a program. Does this loop sound
too good to be true? Well, it isn’t - it’s all true!
Suppose I need to update the bills for all pets staying in my pet
hotel, the Share-a-Din-Din Inn. The example below contains an
anonymous block that uses a cursor, occupancy_cur, to select the room
number and pet ID number for all occupants at the Inn. The procedure
update_bill adds any new changes to that pet’s room charges.
DECLARE
CURSOR occupancy_cur IS
SELECT pet_id, room_number
FROM occupancy
WHERE occupied_dt = SYSDATE;
occupancy_rec occupancy_cur%ROWTYPE;
BEGIN
OPEN occupancy_cur;
LOOP
FETCH occupancy_cur
INTO occupancy_rec;
EXIT WHEN occupancy_cur%NOTFOUND;
update_bill
(occupancy_rec.pet_id,
occupancy_rec.room_number);
END LOOP;
CLOSE occupancy_cur;
END;
This code leaves nothing to the imagination. In addition to defining
the cursor (line 2), you must explicitly declare the record for the
cursor (line 5), open the cursor (line 7), start up an infinite loop,
fetch a row from the cursor set into the record (line 9), check for an
end-of-data condition with the cursor attribute (line 10), and finally
perform the update. When you are all done, you have to remember to
close the cursor (line 14). If I convert this PL/SQL block to use a
cursor FOR loop, then all I have is:
DECLARE
CURSOR occupancy_cur IS
SELECT pet_id, room_number
FROM occupancy WHERE occupied_dt =
SYSDATE;
BEGIN
FOR occupancy_rec IN occupancy_cur
LOOP
update_bill (occupancy_rec.pet_id,
occupancy_rec.room_number);
END LOOP;
END;
Here you see the beautiful simplicity of the cursor FOR loop! Gone is
the declaration of the record. Gone are the OPEN, FETCH, and CLOSE
statements. Gone is the need to check the %FOUND attribute. Gone are the
worries of getting everything right. Instead, you say to PL/SQL, in
effect: "You and I both know that I want each row and I want to dump
that row into a record that matches the cursor. Take care of that for
me, will you?" And PL/SQL does take care of it, just the way any
modern programming language integrated with SQL should.
