How to work around a memory leak in Oracle

I have jobs in Oracle that can run for hours doing a lot of calculations, involving (but not limited to) XMLTransform. I have noticed that PGA memory usage increases (and performance degrades) gradually until, at some point, the job fails with an out-of-memory (PGA) error. We have applied some fixes, but they don't seem to solve the issue.
Stopping the jobs and restarting them solves my issue: the performance is good again and the memory is low.
All the code is written in PL/SQL and SQL.
Question:
As I want to solve this as soon as possible, I was wondering how I can work around this type of issue in Oracle.
My main thinking goes to somehow:
- restarting the job after some time (possibly the simplest solution), using Advanced Queuing
- restarting the current session?
- executing some code synchronously in another session, maybe another job
Oracle 12.1.0.2
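For reference, the first option could be a small watchdog along these lines (a hedged sketch using the standard DBMS_SCHEDULER STOP_JOB/RUN_JOB calls; MY_LONG_JOB is a placeholder, and the job itself would have to be able to resume where it left off):

begin
  -- force => true terminates the session running the job, releasing its PGA
  dbms_scheduler.stop_job (job_name => 'MY_LONG_JOB', force => true);
  -- restart the job in a fresh session, and therefore with a fresh PGA
  dbms_scheduler.run_job (job_name => 'MY_LONG_JOB', use_current_session => false);
end;
/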
EDIT: As asked, here's sample code using XMLTransform:
function i_Convert_Xml_To_Clob (p_Zoek_Result_Type_Id in Zoek_Result_Type.Zoek_Result_Type_Id%type,
                                p_Xml                 in xmltype,
                                p_Xml_Transformation  in xmltype) return clob is
  mResult clob;
begin
  if p_Xml_Transformation is not null then
    select Xmltransform (p_Xml, p_Xml_Transformation).getclobval()
      into mResult
      from Dual;
  elsif p_Xml is not null then
    mResult := p_Xml.getclobval();
  else
    mResult := null;
  end if;
  return mResult;
end i_Convert_Xml_To_Clob;

You or a DBA can monitor temporary LOB usage from another session using V$TEMPORARY_LOBS. If the number of LOBs is increasing, the session is not freeing them correctly, and this will lead to increasing PGA usage (note that this is not a leak).
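For instance, run something like this repeatedly while the job is active (an illustrative query; substitute the SID of the session you are watching):

select sid, cache_lobs, nocache_lobs, abstract_lobs
  from v$temporary_lobs
 where sid = &SID_OF_INTEREST;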
The most common scenario is processing a statement that returns one or more temporary LOBs, for instance XMLTRANSFORM().getClobVal().
It is not uncommon for (Java?) developers to forget that a TEMP LOB is a SESSION-level object and that the resources associated with it will not be freed as a result of the client handle or reference going out of scope. E.g., if you fetch a TEMP LOB into a Java Clob object, you cannot rely on garbage collection to clean up the LOB. You must explicitly free the LOB before overwriting it with the next one, or the LOB resources will be held by the server until the session ends.
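The same pattern applies in a long-running PL/SQL loop; here is a minimal sketch of explicitly freeing a temporary LOB (Some_Table and its XMLType columns are hypothetical):

declare
  mClob clob;
begin
  for r in (select Xml, Xsl from Some_Table) loop
    select xmltransform (r.Xml, r.Xsl).getclobval()  -- returns a new temporary CLOB
      into mClob
      from dual;
    -- ... use mClob ...
    if mClob is not null and dbms_lob.istemporary (mClob) = 1 then
      dbms_lob.freetemporary (mClob);  -- free it before the next iteration, or PGA grows
    end if;
  end loop;
end;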
Since we don't have sample code, we cannot definitively state that this is what is happening in your case.

Related

PLSQL function returns a clob but it's unclear if it is being freed implicitly or automatically

I am very new to PL/SQL. However, I am working with the CLOB datatype, and I've heard that it is easy to create memory leaks with CLOBs.
Here is a function I created that simply selects data from a table and puts it into a CLOB. Is there anything else I need to do to ensure my memory is being managed properly?
CREATE OR REPLACE FUNCTION getLastGPoverPeriod
  RETURN clob IS
  stuff clob;
BEGIN
  SELECT NAME INTO stuff
    FROM TEMP;
  dbms_output.put_line(stuff);
  RETURN stuff;
END;
/
Your code is fine. Resources used by a PL/SQL block are automatically cleaned up when the block completes (except for things like uncommitted transactions). I've never seen memory leaks caused by CLOBs.
There are some potential space issues with large CLOBs, though: CLOBs can be stored in temporary tablespace, which is a finite resource and not always sized properly. But reading a single CLOB at a time shouldn't be a problem, unless it's multi-gigabyte.
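If temporary space does become a concern, a query along these lines shows which sessions are consuming it (an illustrative sketch):

select s.sid, s.username, u.tablespace, u.segtype,
       round(u.blocks * t.block_size / 1024 / 1024) as mb
  from v$tempseg_usage u
  join v$session s on s.saddr = u.session_addr
  join dba_tablespaces t on t.tablespace_name = u.tablespace;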

Oracle Temporary Table to convert a Long Raw to Blob

Questions have been asked in the past that seem to handle pieces of my full question, but I'm not finding a totally good answer.
Here is the situation:
I'm importing data from an old, but operational and production, Oracle server.
One of the columns is created as LONG RAW.
I will not be able to convert the table to a BLOB.
I would like to use a global temporary table to pull out data each time I call to the server.
This feels like a good answer, from here: How to create a temporary table in Oracle
CREATE GLOBAL TEMPORARY TABLE newtable
ON COMMIT PRESERVE ROWS
AS SELECT
MYID, TO_LOB("LONGRAWDATA") "BLOBDATA"
FROM oldtable WHERE .....;
I do not want the table hanging around, and I'd only do a chunk of rows at a time, to pull out the old table in pieces, each time killing the table. Is it acceptable behavior to do the CREATE, then do the SELECT, then DROP?
Thanks..
--- EDIT ---
Just as a follow-up, I decided to take a different approach to this.
Branching the strong-oracle package, I was able to do what I originally hoped to do, which was to pull the data directly from the table without doing a conversion.
Here is the issue I've posted. If I am allowed to publish my code to a branch, I'll post a follow up here for completeness.
Oracle ODBC Driver Release 11.2.0.1.0 says that Prefetch for LONG RAW data types is supported, which is true.
One caveat is that LONG RAW can technically be up to 2GB in size. I had to set a hard max size of 10MB in the code, which is adequate for my use, so far at least. This probably could be a variable sent in to the connection.
This fix is a bit off original topic now however, but it might be useful to someone else.
With Oracle GTTs, it is not necessary to drop and create the table each time, and you don't need to worry about data "hanging around." In fact, it's inadvisable to drop and re-create it. The structure itself persists, but the data in it does not: the data persists only within each session. You can test this by opening up two separate clients and loading data with one; you will notice it's not there in the second client.
In effect, each time you open a session, it's like you are reading a completely different table, which was just truncated.
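A minimal demonstration of that session-private behavior (my_gtt is a placeholder name):

-- session 1
CREATE GLOBAL TEMPORARY TABLE my_gtt (id NUMBER) ON COMMIT PRESERVE ROWS;
INSERT INTO my_gtt VALUES (1);
COMMIT;
SELECT COUNT(*) FROM my_gtt;  -- returns 1

-- session 2, a separate connection
SELECT COUNT(*) FROM my_gtt;  -- returns 0: each session sees only its own rows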
If you want to empty the table within your stored procedure, you can always truncate it. Within a stored proc you will need EXECUTE IMMEDIATE to do this, since TRUNCATE is DDL.
This is really handy, but it also can make debugging a bear if you are implementing GTTs through code.
Out of curiosity, why a chunk at a time and not the entire dataset? What kind of data volumes are you talking about?
-- EDIT --
Per our comments conversation, this is very raw and untested, but I hope it will give you an idea what I mean:
CREATE OR REPLACE PROCEDURE LOAD_DATA  -- no empty parentheses on a parameterless procedure
AS
  TOTAL_ROWS     number;
  TABLE_ROWS     number := 1;
  ROWS_AT_A_TIME number := 100;
BEGIN
  select count (*)
    into TOTAL_ROWS
    from oldtable;
  WHILE TABLE_ROWS <= TOTAL_ROWS
  LOOP
    execute immediate 'truncate table MY_TEMP_TABLE';
    -- ROWNUM is assigned as rows are returned, so "ROWNUM between 101 and 200"
    -- can never match; instead grab the next batch of rows that have not yet
    -- been copied (assumes MYID is a unique key)
    insert into MY_TEMP_TABLE
    SELECT MYID, TO_LOB(LONGRAWDATA) as BLOBDATA
      FROM oldtable o
     WHERE ROWNUM <= ROWS_AT_A_TIME
       AND NOT EXISTS (select 1
                         from MY_REMOTE_TABLE@MY_REMOTE_SERVER r  -- "@", not "#", for a database link
                        where r.MYID = o.MYID);
    insert into MY_REMOTE_TABLE@MY_REMOTE_SERVER
    select * from MY_TEMP_TABLE;
    commit;
    TABLE_ROWS := TABLE_ROWS + ROWS_AT_A_TIME;
  END LOOP;
  commit;
end LOAD_DATA;

Oracle unused cursors create overhead?

I am working on an application that has a lot of cursors, many of which are just declared in a package header and not used in the package body. Does an unused cursor like this create any overhead?
A declared but unused cursor will create no overhead, but an open but unused cursor might, a bit.
An open cursor is stored in the private SQL area of the PGA, which "holds information about a parsed SQL statement and other session-specific information for processing". You can find out how much PGA you are using by querying V$PGASTAT.
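For instance (an illustrative query; values are reported in bytes):

select name, value
  from v$pgastat
 where name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'total PGA inuse');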
It's not 100% clear from Oracle's Memory Architecture documentation whether opened but unused cursors store anything in the PGA. The section on the persistent area of the private SQL area suggests that it is only created if you're binding variables to your cursor; but as the state of the cursor must be stored for the database to know that it's open, I'm assuming that some memory is used.
If a single open cursor is negatively impacting your performance, I'd be horrified: that would indicate you've massively underestimated the size of the PGA and SGA (execution plans are stored here) that you need.
However, leaving cursors open can backfire massively, as the number of open cursors is limited by the open_cursors parameter, which you can find in V$PARAMETER. This is an absolute upper limit on the number of cursors you can have open; if you hit this limit you'll get ORA-01000.
This means that you should not open cursors that you're not going to use.
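To see the limit and how close each session is to it, something like the following works (illustrative queries):

select value as open_cursors_limit
  from v$parameter
 where name = 'open_cursors';

select s.sid, s.username, st.value as cursors_currently_open
  from v$sesstat st
  join v$statname sn on sn.statistic# = st.statistic#
  join v$session  s  on s.sid = st.sid
 where sn.name = 'opened cursors current'
 order by st.value desc;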
However it's also worth noting this particular Ask Tom question/answer, though it's from 2004.
3- If the open_cursors are increased then what will be the performance impact on the db > server and the memory usage?
...
Followup April 1, 2004 - 10am UTC:
...
...3) if you are not hitting ora-1000, it will change nothing (since you are not using the cursors you currently have)

What if I do not Explicitly close the sys_refcursor in oracle?

What if I do not explicitly close a SYS_REFCURSOR in Oracle? Will it cause an open-cursor issue and slow the application down?
It should be discarded / automatically closed once it goes 'out of scope'.
However, what 'out of scope' means can vary depending on the client technology (JDBC, PL/SQL, etc.). Within PL/SQL, for instance, it can depend on whether the cursor is held in a package variable or a local variable.
As Dave's answer suggests, each open cursor will count against the total limit - eventually you will hit this limit and get an application error.
I would say that best practice is to explicitly close when you are done.
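A minimal sketch of the explicit-close pattern in PL/SQL (some_table is a placeholder):

declare
  rc   SYS_REFCURSOR;
  v_id number;
begin
  open rc for select id from some_table;
  loop
    fetch rc into v_id;
    exit when rc%notfound;
    -- ... process v_id ...
  end loop;
  close rc;  -- releases the cursor slot now, rather than waiting for it to go out of scope
end;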
As long as the cursor is open it will count against the limit defined by OPEN_CURSORS, so it could cause issues if you repeatedly open cursors and don't close them.
It will also continue to consume some memory until it is closed. I don't think it's likely to degrade performance significantly though.

SQLDeveloper using over 100MB of PGA

Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100 MB of combined UGA and PGA memory. I'd like to know whether this is normal and what can be done about it. Our database is on the 32-bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:
SELECT e.SID, e.username, e.status, b.PGA_MEMORY
  FROM v$session e
  LEFT JOIN
       (select y.SID, y.value pga,
               TO_CHAR(ROUND(y.value/1024/1024), '99999999') || ' MB' PGA_MEMORY
          from v$sesstat y, v$statname z
         where y.STATISTIC# = z.STATISTIC# and z.NAME = 'session pga memory') b
    ON e.sid = b.sid
 WHERE b.pga/1024/1024 > 20
 ORDER BY b.pga DESC;
It seems that the resource usage goes up any time a table is opened in SQL Developer, but even when the table is closed the memory does not go away. The problem is worse if the table was sorted while it was open, as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but using memory after it is closed seems wrong to me. Can anyone confirm this?
Update:
I discovered that my numbers were off because I had not understood that under dedicated server mode the UGA is stored inside the PGA. This makes the numbers lower than they were, but the problem remains that SQL Developer seems to use excessive PGA.
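For what it's worth, you can see the two figures side by side for a session (an illustrative query):

select sn.name, st.value
  from v$sesstat st
  join v$statname sn on sn.statistic# = st.statistic#
 where st.sid = &SID_OF_INTEREST
   and sn.name in ('session pga memory', 'session uga memory');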
Perhaps SQL Developer doesn't close the cursors it has opened.
So if you run a query which sorts a million rows and SQL Developer fetches only the first 20 rows, it needs to keep the cursor open in case you want to scroll down and fetch more.
So, it needs to keep some of the PGA memory associated with the cursor's sort area still allocated (it's called retained sort area) as long as the cursor is open and hasn't reached EOF (end-of-fetch).
Pick a session and run:
select sql_id, operation_type, actual_mem_used, max_mem_used, tempseg_size
  from v$sql_workarea_active
 where sid = &SID_OF_INTEREST;
This should show whether some cursors are still kept open with their memory...
Are you using Automatic Memory Management? If yes, I would not worry about the PGA memory used.
See docs:
Automatic Memory Management: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/memory003.htm#ADMIN11011
MEMORY_TARGET: http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams133.htm
Is there a reason you are using 32 bit Oracle? Most recent hardware supports 64 bit.
Oracle, especially with AMM, will use every bit of memory on the machine you give it. If it doesn't have a reason to de-allocate memory it will not do so. It is the same with storage space: if you delete 20 GB of user data that space is not returned to the OS. Oracle will hold on to it unless you explicitly compact the tablespaces.
I believe a simple test should relieve your concerns. If it's 32 bit, and each SQL Developer session is using 100MB+ of RAM, then you'd only need a few hundred sessions open to cause a low-memory problem...if there really is one.
