Oracle unused cursors create overhead?

I am working on an application that has a lot of cursors, and many of them are only declared in a package header and never used in the package body. Do these unused cursors create any overhead?

A declared but unused cursor creates no overhead, but an open but unused cursor might, a bit.
An open cursor is stored in the private SQL area of the PGA, which "holds information about a parsed SQL statement and other session-specific information for processing". You can find the amount of PGA you have by querying V$PGASTAT.
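For example, a quick look at overall PGA usage (a minimal sketch; the statistic names are the usual V$PGASTAT entries):
-- Compare what the instance has allocated against its target.
select name, round(value/1024/1024) as mb
from   v$pgastat
where  name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');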
It's not 100% clear from Oracle's Memory Architecture documentation whether opened but unused cursors store anything in the PGA. The section on the persistent area of the private SQL area of the PGA suggests that this area is only created if you're binding variables to your cursor; but, since the state of the cursor must be stored for the DB to know that it's open, I'm assuming that some memory is used.
If a single open cursor is negatively impacting your performance I'd be horrified. This would be an indication that you've massively underestimated the size of the PGA and SGA (execution plans are stored here) that you need.
However, leaving cursors open can backfire massively, as the number of open cursors is limited by the open_cursors parameter, which you can find in V$PARAMETER. This is an absolute upper limit on the number of cursors you can have open at once. If you hit this limit you'll get ORA-01000.
This means that you should not open cursors that you're not going to use.
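To see how close a session is to that limit, something like this should work (a sketch; 'opened cursors current' is the standard session statistic for currently open cursors):
-- The hard limit:
select value from v$parameter where name = 'open_cursors';
-- Current usage per session:
select s.sid, s.username, st.value as open_cursors
from   v$sesstat  st
join   v$statname sn on sn.statistic# = st.statistic#
join   v$session  s  on s.sid = st.sid
where  sn.name = 'opened cursors current'
order by st.value desc;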
However, it's also worth noting this particular Ask Tom question/answer, though it's from 2004.
3- If the open_cursors are increased then what will be the performance impact on the db server and the memory usage?
...
Followup April 1, 2004 - 10am UTC:
...
...3) if you are not hitting ora-1000, it will change nothing (since you are not using the cursors you currently have)


Auto Optimizer Stats Collection Job Causing Oracle RDS Database to Restart

We have an Oracle 19c database (19.0.0.0.ru-2021-04.rur-2021-04.r1) on AWS RDS, hosted on a 4 CPU, 32 GB RAM instance. The database is not big (35 GB); the PGA Aggregate Limit is 8 GB and the Target is 4 GB. Whenever the scheduled internal Oracle Auto Optimizer Stats Collection Job (ORA$AT_OS_OPT_SY_nnn) runs, it consumes substantially high PGA memory (approx. 7 GB), and sometimes this makes the database unstable; AWS then loses communication with the RDS instance and restarts the database.
We thought this might be linked to the existing Oracle bug 30846782 (19C+: Fast/Excessive PGA growth when using DBMS_STATS.GATHER_TABLE_STATS), but Oracle and AWS have fixed that in the 19c version we are using. There are no application-level operations that consume this much PGA, and the database restarts have always happened while the Auto Optimizer Stats Collection Job was running. There are a couple more databases, on the same version, where the same pattern was observed and the database was restarted by AWS. We have disabled the job on those databases to avoid further occurrences of this issue, but we would still like to run it, since disabling it leaves the database with stale statistics.
Any pointers on how to tackle this issue?
I ran into the same issue on my AWS RDS Oracle 18c and 19c instances, even though I am not on the same patch level as you.
In my case, I applied this workaround and it worked.
SQL> alter system set "_fix_control"='20424684:OFF' scope=both;
However, before applying this change, I strongly suggest that you test it in your non-production environments and, if you can, consult Oracle Support. Dealing with hidden parameters can lead to unexpected side effects, so apply it at your own risk.
Instead of completely abandoning automatic statistics gathering, try to find the specific objects that are causing the problem. If only a small number of tables are responsible for a large amount of statistics gathering, you can manually analyze those tables or change their preferences.
First, use the SQL below to see which objects are causing the most statistics gathering. According to the test case in bug 30846782, the problem seems to be related only to the number of times DBMS_STATS is called.
select *
from dba_optstat_operations
order by start_time desc;
In addition, you may be able to find specific SQL statements or sessions that generate a lot of PGA memory with the query below. (However, if the database restarts, it's possible that AWR won't have saved the recorded values.)
select username, event, sql_id, pga_allocated/1024/1024/1024 pga_allocated_gb, gv$active_session_history.*
from gv$active_session_history
join dba_users on gv$active_session_history.user_id = dba_users.user_id
where pga_allocated/1024/1024/1024 >= 1
order by sample_time desc;
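If you would rather watch PGA consumption live, so that a restart can't take the evidence with it, a join of V$PROCESS and V$SESSION works too. A sketch using the standard pga_used_mem/pga_alloc_mem columns:
select s.sid, s.username, s.program,
       round(p.pga_used_mem/1024/1024)  as pga_used_mb,
       round(p.pga_alloc_mem/1024/1024) as pga_alloc_mb
from   v$process p
join   v$session s on s.paddr = p.addr
order by p.pga_used_mem desc;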
If the problem is only related to a small number of tables with a large number of partitions, you can manually gather the stats on just that table in a separate session. Once the stats are gathered, the table won't be analyzed again until about 10% of the data is changed.
begin
    dbms_stats.gather_table_stats(user, 'PGA_STATS_TEST');
end;
/
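For a heavily partitioned table you may also want to control how much work a single gather does. A hedged variant of the call above (granularity and degree are standard DBMS_STATS parameters; the values here are only examples):
begin
    dbms_stats.gather_table_stats(
        ownname     => user,
        tabname     => 'PGA_STATS_TEST',       -- same example table as above
        granularity => 'GLOBAL AND PARTITION', -- skip subpartition-level stats
        degree      => 4);                     -- modest parallelism
end;
/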
It's not uncommon for a database to spend a long time gathering statistics, but it is uncommon for a database to constantly analyze thousands of objects. Running into this bug implies there is something unusual about your database - are you constantly dropping and creating objects, or do you have a large number of objects that have 10% of their data modified every day? You may need to add a manual gather step to a few of your processes.
Turning off the automatic statistics job entirely will eventually cause many performance problems. Even if you can't add manual gathering steps, you may still want to keep the job enabled. For example, if tables are being analyzed too frequently, you may want to increase the table preference for the "STALE_PERCENT" threshold from 10% to 20%:
begin
    dbms_stats.set_table_prefs
    (
        ownname => user,
        tabname => 'PGA_STATS_TEST',
        pname   => 'STALE_PERCENT',
        pvalue  => '20'
    );
end;
/

Oracle: Return Large Dataset with Cursor in Procedure

I've seen lots of posts regarding the use of cursors in PL/SQL to return data to a calling application, but none of them touch on the issue I believe I'm having with this technique. I am fairly new to Oracle, but have extensive experience with MSSQL Server. In SQL Server, when building queries to be called by an application for returning data, I usually put the SELECT statement inside a stored proc with/without parameters, and let the stored proc execute the statement(s) and return the data automatically. I've learned that with PL/SQL, you must store the resulting dataset in a cursor and then consume the cursor.
We have a query that doesn't necessarily return huge amounts of rows (~5K - 10K rows), however the dataset is very wide as it's composed of 1400+ columns. Running the SQL query itself in SQL Developer returns results instantaneously. However, calling a procedure that opens a cursor for the same query takes 5+ minutes to finish.
CREATE OR REPLACE PROCEDURE PROCNAME (RESULTS OUT SYS_REFCURSOR)
AS
BEGIN
    OPEN RESULTS FOR
        <SELECT_query_with_1400+_columns>
        ...
END;
After doing some debugging to try to get to the root cause of the slowness, I'm leaning towards the cursor returning one row at a time, very slowly. I can actually see this in real time by converting the proc code into a PL/SQL block and using DBMS_SQL.return_result(RESULTS) after the SELECT query. When running this, I can see each row show up in the Script Output window in SQL Developer one at a time. If this is exactly how the cursor returns data to the calling application, then I can definitely see how this is the bottleneck, as it could take 5-10 minutes to finish returning all 5K-10K rows. If I remove columns from the SELECT query, the cursor displays all the rows much faster, so it does seem like the large number of columns is an issue when using a cursor.
Knowing that running the SQL query by itself returns instant results, how could I get this same performance out of a cursor? It doesn't seem like it's possible. Is the answer putting the embedded SQL in the application code and not using a procedure/cursor to return data in this scenario? We are using Oracle 12c in our environment.
Edit: Just want to address how I am testing performance using the regular SELECT query vs the PL/SQL block with cursor method:
SELECT (takes ~27 seconds to return ~6K rows):
SELECT <1400+_columns>
FROM <table_name>;
PL/SQL with cursor (takes ~5-10 minutes to return ~6K rows):
DECLARE
    RESULTS SYS_REFCURSOR;
BEGIN
    OPEN RESULTS FOR
        SELECT <1400+_columns>
        FROM <table_name>;
    DBMS_SQL.return_result(RESULTS);
END;
Some of the comments reference what happens in the console application once all the data is returned, but I am speaking only about the performance of the two methods described above within Oracle/SQL Developer. Hope this helps clarify the point I'm trying to convey.
You can run a SQL Monitor report for the two executions of the SQL; that will show you exactly where the time is being spent. I would also consider running the two approaches in separate snapshot intervals and comparing the output of an AWR Differences report and an ADDM Compare report; you'd probably be surprised at the amazing detail these comparison reports provide.
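For example, once you have the SQL_ID of each run, a text-mode SQL Monitor report can be pulled like this (a sketch; REPORT_SQL_MONITOR is the standard DBMS_SQLTUNE call, and &SQL_ID is a placeholder):
select dbms_sqltune.report_sql_monitor(
           sql_id       => '&SQL_ID', -- the execution you want to inspect
           type         => 'TEXT',
           report_level => 'ALL')
from dual;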
Also, even though more than 255 columns in a table is a "no-no" according to Oracle, as it will fragment each row across more than one database block and thus increase the IO needed to retrieve the results, I suspect the difference between the two approaches is not an IO problem, since you report that the straight SQL fetches everything quickly. I suspect a memory problem instead. As you probably know, PL/SQL code uses the Program Global Area (PGA), so I would check the parameter pga_aggregate_target and bump it up to, say, 5 GB (just guessing). An ADDM report run for the interval when the code ran will tell you if the advisor recommends a change to that parameter.
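Checking and adjusting the parameter is straightforward (the 5 GB figure is only the guess from above; size it from your own ADDM output):
show parameter pga_aggregate_target   -- SQL*Plus / SQL Developer worksheet
alter system set pga_aggregate_target = 5G scope=both;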

How to workaround a memory leak in Oracle

I have jobs in Oracle that can run for hours doing a lot of calculations involving (but not limited to) XmlTransform. I have noticed that PGA memory increases (and performance degrades) gradually, until at some point the job fails with an out-of-memory (PGA) message. We have applied some fixes, but they don't seem to solve the issue.
Stopping the jobs and restarting them solves my issue; the performance is good again and the memory is low...
All the code is written in PL/SQL and SQL.
Question:
As I want to solve this as soon as possible, I was wondering how I can work around this type of issue in Oracle.
My main ideas so far:
restarting the job after some time (possibly the simplest solution) using Advanced Queuing
restarting the current session?
executing some code synchronously in another session, maybe another job.
Oracle 12.1.0.2
EDIT: As asked, here's sample code with XMLTransform:
function i_Convert_Xml_To_Clob (p_Zoek_Result_Type_Id in Zoek_Result_Type.Zoek_Result_Type_Id%type,
                                p_Xml                 in xmltype,
                                p_Xml_Transformation  in xmltype) return clob is
    mResult clob;
begin
    if p_Xml_Transformation is not null then
        select Xmltransform(p_Xml, p_Xml_Transformation).getclobval()
          into mResult
          from Dual;
    elsif p_Xml is not null then
        mResult := p_Xml.getclobval();
    else
        mResult := null;
    end if;
    return mResult;
end i_Convert_Xml_To_Clob;
Can you or a DBA monitor temp LOB usage from another session using V$TEMPORARY_LOBS? If the number of LOBs keeps increasing, the session is not freeing them correctly, and that will lead to increasing PGA usage (note this is not a leak).
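Run repeatedly while the job executes, something like this shows whether a session's temp LOB count keeps climbing (a sketch using the standard V$TEMPORARY_LOBS columns):
select sid, cache_lobs, nocache_lobs, abstract_lobs
from   v$temporary_lobs
order by cache_lobs + nocache_lobs desc;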
The most common scenario is processing a statement that returns one or more temporary LOBs, for instance XMLTRANSFORM(...).getClobVal().
It is not uncommon for (Java?) developers to forget that a temporary LOB is a SESSION-level object, and the resources associated with it will not be freed as a result of the client handle or reference going out of scope. E.g. if you fetch a temporary LOB into a Java Clob object, you cannot rely on garbage collection to clean up the LOB. You must explicitly free the LOB before overwriting it with the next one, or the LOB resources will be held by the server until the session ends.
Since we don't have sample code we cannot definitely state this is what is happening in your case.
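Based on the sample code later added to the question, the likely fix is for whatever calls i_Convert_Xml_To_Clob to free each returned temporary CLOB as soon as it is done with it. A hedged sketch (the driving query and loop are hypothetical; DBMS_LOB.FREETEMPORARY and DBMS_LOB.ISTEMPORARY are the standard calls):
declare
    mClob clob;
begin
    for r in (select Zoek_Result_Type_Id, Xml, Xml_Transformation
              from   Some_Work_Table)  -- hypothetical driving query
    loop
        mClob := i_Convert_Xml_To_Clob(r.Zoek_Result_Type_Id, r.Xml, r.Xml_Transformation);
        -- ... use mClob ...
        if mClob is not null and dbms_lob.istemporary(mClob) = 1 then
            dbms_lob.freetemporary(mClob);  -- release the temp LOB now, not at session end
        end if;
    end loop;
end;
/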

What if I do not explicitly close the sys_refcursor in Oracle?

What if I do not explicitly close a sys_refcursor in Oracle? Will it cause an open-cursor issue and result in the application slowing down?
It should be discarded / automatically closed once it goes 'out of scope'.
However, what 'out of scope' means can vary depending on the client technology (JDBC, PL/SQL, etc). Within PL/SQL, for instance, it can depend on whether the cursor is held as a package variable or local variable.
As Dave's answer suggests, each open cursor will count against the total limit - eventually you will hit this limit and get an application error.
I would say that best practice is to explicitly close when you are done.
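In PL/SQL, that boils down to a CLOSE call, guarded so it is safe on error paths too (a minimal sketch):
declare
    rc sys_refcursor;
begin
    open rc for select dummy from dual;
    -- ... fetch from rc ...
    if rc%isopen then
        close rc;  -- release the cursor slot and its memory deterministically
    end if;
exception
    when others then
        if rc%isopen then
            close rc;  -- don't leak the cursor on an error path either
        end if;
        raise;
end;
/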
As long as the cursor is open it will count against the limit defined by OPEN_CURSORS, so it could cause issues if you repeatedly open cursors and don't close them.
It will also continue to consume some memory until it is closed. I don't think it's likely to degrade performance significantly though.

SQLDeveloper using over 100MB of PGA

Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32 bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:
SELECT e.sid, e.username, e.status, b.pga_memory
FROM v$session e
LEFT JOIN
    (SELECT y.sid, y.value pga,
            TO_CHAR(ROUND(y.value/1024/1024), '99999999') || ' MB' pga_memory
     FROM v$sesstat y, v$statname z
     WHERE y.statistic# = z.statistic# AND z.name = 'session pga memory') b
ON e.sid = b.sid
WHERE b.pga/1024/1024 > 20
ORDER BY 4 DESC;
It seems that the resource usage goes up any time a table is opened in SQLDeveloper, but even when it is closed the memory does not go away. The problem is worse if the table is sorted while it was open as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but to use memory after it is closed seems wrong to me. Can anyone confirm this?
Update:
I discovered that my numbers were off due to not understanding that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem still remains that SQL Developer seems to use excessive PGA.
Perhaps SQL Developer doesn't close the cursors it had opened.
So if you run a query which sorts a million rows and SQL Developer fetches only first 20 rows from there, it needs to keep the cursor open should you want to scroll down and fetch more.
So, it needs to keep some of the PGA memory associated with the cursor's sort area still allocated (it's called retained sort area) as long as the cursor is open and hasn't reached EOF (end-of-fetch).
Pick a session and run:
select sql_id, operation_type, actual_mem_used, max_mem_used, tempseg_size
from   v$sql_workarea_active
where  sid = &SID_OF_INTEREST;
This should show whether some cursors are still kept open with their memory...
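You can also list what the session still has open via V$OPEN_CURSOR (a sketch; the cursor_type column is available in recent releases):
select sql_id, cursor_type, count(*) as cnt
from   v$open_cursor
where  sid = &SID_OF_INTEREST
group by sql_id, cursor_type
order by cnt desc;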
Are you using Automatic Memory Management? If yes, I would not worry about the PGA memory used.
See docs:
Automatic Memory Management: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/memory003.htm#ADMIN11011
MEMORY_TARGET: http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams133.htm
Is there a reason you are using 32 bit Oracle? Most recent hardware supports 64 bit.
Oracle, especially with AMM, will use every bit of memory on the machine you give it. If it doesn't have a reason to de-allocate memory, it will not do so. It is the same with storage space: if you delete 20 GB of user data, that space is not returned to the OS. Oracle will hold on to it unless you explicitly compact the tablespaces.
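For what it's worth, that storage-side reclamation is explicit too; a hedged sketch of compacting one table and then shrinking a datafile (the table and file names are hypothetical, and SHRINK SPACE requires ASSM):
alter table my_big_table enable row movement;
alter table my_big_table shrink space;  -- compact the segment and lower the high-water mark
alter database datafile '/u01/oradata/users01.dbf' resize 10G;  -- hypothetical file and size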
I believe a simple test should relieve your concerns. If it's 32 bit, and each SQL Developer session is using 100MB+ of RAM, then you'd only need a few hundred sessions open to cause a low-memory problem...if there really is one.
