Where does Oracle store the global variables declared in a PL/SQL package specification?

In PL/SQL, where are the global variables declared in a package specification stored?

Oracle stores global variables in memory structures that are part of the Program Global Area. You can read about the PGA in the Memory Architecture chapter of the Oracle Concepts Guide.
These variables are only accessible through PL/SQL; we can't read them through data dictionary views.
Session variables must fit in the PGA and cannot spill to disk, so we have to be careful about loading too much data. Avoid storing large tables in variables; that can usually be done by processing cursors with FOR loops or by fetching with a LIMIT clause, as sketched below.
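Here is a minimal sketch of the LIMIT approach, assuming a hypothetical table big_table (not from the original question); only one batch of rows sits in the collection at any time:
declare
    cursor c is select * from big_table;
    type row_nt is table of c%rowtype;
    v_rows row_nt;
begin
    open c;
    loop
        fetch c bulk collect into v_rows limit 100;
        exit when v_rows.count = 0;
        for i in 1 .. v_rows.count loop
            null; --process v_rows(i) here
        end loop;
    end loop;
    close c;
end;
/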
By contrast, the following code loads a large amount of data into a single collection all at once.
--Load 20,000 large strings. Takes about 10 seconds.
declare
    type string_nt is table of varchar2(4000);
    v_strings string_nt := string_nt();
begin
    for i in 1 .. 20000 loop
        v_strings.extend;
        v_strings(v_strings.count) := lpad('A', 4000, 'A');
    end loop;
end;
/
We can't view the variable data itself, but we can see how much memory the session is using through the dynamic performance views. In this case it takes about 105 MB of memory to store 80 MB of raw data:
--Maximum session PGA memory.
select value/1024/1024 mb
from v$sesstat
join v$statname
on v$sesstat.statistic# = v$statname.statistic#
where v$statname.name = 'session pga memory max'
order by value desc;
(My answer is based on the assumption that you're asking because you're worried about storing lots of data. If my assumption is wrong, please update the question to explain precisely what you're looking for.)

Related

PL/SQL function returns a CLOB but it's unclear if it is being freed implicitly or automatically

I am very new to PL/SQL, but I am working with the CLOB datatype, and I've heard that it is easy to create memory leaks with CLOBs.
Here is a function I created that simply selects data from a table and puts it into a CLOB. Is there anything else I need to do to ensure my memory is being managed properly?
CREATE OR REPLACE FUNCTION getLastGPoverPeriod
RETURN clob IS
stuff clob;
BEGIN
SELECT NAME INTO stuff
FROM TEMP;
dbms_output.put_line(stuff);
RETURN stuff;
END;
/
Your code is fine. Resources used by a PL/SQL block are automatically cleaned up when the block completes (except for things like uncommitted transactions). I've never seen memory leaks caused by CLOBs.
There are, however, some potential space issues with large CLOBs: CLOBs can be stored in the temporary tablespace, which is a finite resource and not always sized properly. But reading a single CLOB at a time shouldn't be a problem unless it's multiple gigabytes in size.
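As a side note, if you ever do build temporary CLOBs yourself (for example by concatenating pieces in a long-running session), DBMS_LOB lets you release the temporary-tablespace space explicitly instead of waiting for the session to end. A minimal sketch:
declare
    v_clob clob;
begin
    --Explicitly create a session-duration temporary CLOB.
    dbms_lob.createtemporary(v_clob, cache => true);
    dbms_lob.writeappend(v_clob, 5, 'Hello');
    --... use v_clob ...
    --Release the temporary LOB space now rather than at session end.
    dbms_lob.freetemporary(v_clob);
end;
/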

Oracle Temporary Table to convert a Long Raw to Blob

Questions have been asked in the past that seem to handle pieces of my full question, but I'm not finding a totally good answer.
Here is the situation:
I'm importing data from an old, but operational and production, Oracle server.
One of the columns is created as LONG RAW.
I will not be able to convert the table to a BLOB.
I would like to use a global temporary table to pull out data each time I call to the server.
This feels like a good answer, from here: How to create a temporary table in Oracle
CREATE GLOBAL TEMPORARY TABLE newtable
ON COMMIT PRESERVE ROWS
AS SELECT
MYID, TO_LOB("LONGRAWDATA") "BLOBDATA"
FROM oldtable WHERE .....;
I do not want the table hanging around, and I'd only do a chunk of rows at a time, to pull out the old table in pieces, each time killing the table. Is it acceptable behavior to do the CREATE, then do the SELECT, then DROP?
Thanks..
--- EDIT ---
Just as a follow up, I decided to take an even different approach to this.
Branching the strong-oracle package, I was able to do what I originally hoped to do, which was to pull the data directly from the table without doing a conversion.
Here is the issue I've posted. If I am allowed to publish my code to a branch, I'll post a follow up here for completeness.
Oracle ODBC Driver Release 11.2.0.1.0 says that Prefetch for LONG RAW data types is supported, which is true.
One caveat is that LONG RAW can technically be up to 2GB in size. I had to set a hard max size of 10MB in the code, which is adequate for my use, so far at least. This probably could be a variable sent in to the connection.
This fix is a bit off original topic now however, but it might be useful to someone else.
With Oracle GTTs, it is not necessary to drop and create the table each time, and you don't need to worry about data "hanging around." In fact, it's inadvisable to drop and re-create it. The structure itself persists, but the data in it does not: the data only persists within each session. You can test this by opening two separate clients and loading data in one; you will notice it's not there in the second client.
In effect, each time you open a session, it's like you are reading a completely different table, which was just truncated.
If you want to empty the table within your stored procedure, you can always truncate it. Within a stored proc, you will need to execute immediate if you do this.
This is really handy, but it also can make debugging a bear if you are implementing GTTs through code.
Out of curiosity, why a chunk at a time and not the entire dataset? What kind of data volumes are you talking about?
-- EDIT --
Per our conversation in the comments, this is very raw and untested, but I hope it gives you an idea of what I mean:
CREATE OR REPLACE PROCEDURE LOAD_DATA
AS
    TOTAL_ROWS     number;
    TABLE_ROWS     number := 1;
    ROWS_AT_A_TIME number := 100;
BEGIN
    select count(*)
      into TOTAL_ROWS
      from oldtable;

    WHILE TABLE_ROWS <= TOTAL_ROWS
    LOOP
        execute immediate 'truncate table MY_TEMP_TABLE';

        -- Caution: a plain "ROWNUM between x and y" predicate only returns rows
        -- when x = 1, so a real implementation would need a different chunking
        -- predicate (e.g. a primary-key range); the loop structure is the point here.
        insert into MY_TEMP_TABLE
        select MYID, TO_LOB(LONGRAWDATA) as BLOBDATA
          from oldtable
         where ROWNUM between TABLE_ROWS and TABLE_ROWS + ROWS_AT_A_TIME - 1;

        insert into MY_REMOTE_TABLE@MY_REMOTE_SERVER
        select * from MY_TEMP_TABLE;
        commit;

        TABLE_ROWS := TABLE_ROWS + ROWS_AT_A_TIME;
    END LOOP;

    commit;
end LOAD_DATA;

Return data rows from a pl/sql block

I want to write pl/sql code which utilizes a Cursor and Bulk Collect to retrieve my data. My database has rows in the order of millions, and sometimes I have to query it to fetch nearly all records on client's request. I do the querying and subsequent processing in batches, so as to not congest the server and show incremental progress to the client. I have seen that digging down for later batches takes considerably more time, which is why I am trying to do it by way of cursor.
Here is what should be simple pl/sql around my main sql query:
declare
    cursor device_row_cur is
        select /* my_query_details */;
    type l_device_rows is table of device_row_cur%rowtype;
    out_entries l_device_rows := l_device_rows();
begin
    open device_row_cur;

    fetch device_row_cur
        bulk collect into out_entries
        limit 100;

    close device_row_cur;
end;
I am doing batches of 100, and fetching them into out_entries. The problem is that this block compiles and executes just fine, but doesn't return the data rows it fetched. I would like it to return those rows just the way a select would. How can this be achieved? Any ideas?
An anonymous block can't return anything. You can assign values to a bind variable, including a collection type or ref cursor, inside the block. But the collection would have to be defined, as well as declared, outside the block. That is, it would have to be a type you can use in plain SQL, not something defined in PL/SQL. At the moment you're using a PL/SQL type that is defined within the block, and a variable that is declared within the block too - so it's out of scope to the client, and wouldn't be a valid type outside it either. (It also doesn't need to be initialised, but that's a minor issue).
Depending on how it will really be consumed, one option is to use a ref cursor; you can declare and display that through SQL*Plus or SQL Developer with the variable and print commands. For example:
variable rc sys_refcursor
begin
open :rc for ( select ... /* your cursor statement */ );
end;
/
print rc
You can do something similar from a client application, e.g. have a function returning a ref cursor or a procedure with an out parameter that is a ref cursor, and bind that from the application. Then iterate over the ref cursor as a result set. But the details depend on the language your application is using.
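For example, a minimal sketch of the function variant (the function name and the devices table are placeholders, not from the original question); the client binds or receives the sys_refcursor and iterates over it as an ordinary result set:
create or replace function get_device_rows return sys_refcursor is
    rc sys_refcursor;
begin
    open rc for
        select * from devices;  --replace with your cursor statement
    return rc;
end;
/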
Another option is to have a pipelined function that returns a table type - again defined at SQL level (with create type) not in PL/SQL - which might consume fewer resources than a collection that's returned in one go.
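As a rough sketch of that option (again with placeholder names), the object and table types must be created at SQL level so the result can be queried with SELECT ... FROM TABLE(...):
create type device_row_t as object (id number, name varchar2(100));
/
create type device_tab_t as table of device_row_t;
/
create or replace function device_rows_piped return device_tab_t pipelined is
begin
    for r in (select id, name from devices) loop  --your cursor statement
        pipe row (device_row_t(r.id, r.name));
    end loop;
    return;
end;
/
--consumed as: select * from table(device_rows_piped);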
But I'd have to question why you're doing this. You said "digging down for later batches takes considerably more time", which sounds like you're using a paging mechanism in your query, generating a row number and then picking out a range of 100 within that. If your client/application wants to get all the rows then it would be simpler to have a single query execution but fetch the result set in batches.
Unfortunately without any information about the application this is just speculation...
I studied this excellent paper on optimizing pagination:
http://www.inf.unideb.hu/~gabora/pagination/article/Gabor_Andras_pagination_article.pdf
I mainly used technique 6. It describes how to limit the query to fetch page x and onward; as a further improvement, you can limit it to fetch page x alone. Used correctly, it can bring a performance improvement by a factor of 1000.
Instead of returning custom table rows (which is very hard, if not impossible, to interface with Java), I ended up opening a sys_refcursor in my PL/SQL, which can be consumed like this:
OracleCallableStatement stmt = (OracleCallableStatement) connection.prepareCall(sql);
stmt.registerOutParameter(someIndex, OracleTypes.CURSOR);
stmt.execute();
resultSet = stmt.getCursor(someIndex);

PL/SQL Creating Associative Array With Employee Id as Key

I am trying to create a hash with employee_id (NUMBER(6,0)) as the key and salary (NUMBER(8,2)) as the value.
For that I have created an INDEX BY table (associative array) in PL/SQL (Oracle 11g) using the following definition:
TYPE emp_title_hash IS TABLE OF employees.salary%type
INDEX BY employees.employee_id%type;
I am getting the following compilation error:
Error(22,28): PLS-00315: Implementation restriction: unsupported table index type
I am aware that in this case the only supported index types are STRING and PLS_INTEGER. This seems really restrictive. Why exactly has this been imposed in Oracle? Is there a workaround to get the above done?
Appreciate your comments / suggestions.
As someone already pointed out, you can use "index by pls_integer", since pls_integer can contain any number(6,0) value.
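For instance, a minimal sketch along those lines, using the employees table from the question:
declare
    type emp_salary_hash is table of employees.salary%type
        index by pls_integer;
    v_salaries emp_salary_hash;
begin
    for r in (select employee_id, salary from employees) loop
        v_salaries(r.employee_id) := r.salary;  --employee_id fits in pls_integer
    end loop;
    --look up a salary by employee_id, e.g. v_salaries(100)
end;
/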
Surely it would be nice to be able to use any type to index a PL/SQL associative array, but I have always managed by writing a function that builds a string identifying the object instance I want to use as the index value.
So, instead of writing:
type TMyAssociativeArray is table of MyDataType index by MyIndexType;
I write:
Type TMyAssociativeArray is table of MyDataType index by varchar2(30);
then I write a function that calculates a unique string from MyIndexType:
function GetHash(obj MyIndexType) return varchar2;
and, having written this function I can use it to simulate an associative array indexed by MyIndexType object:
so I can write
declare
    arr TMyAssociativeArray;
    obj MyDataType;
    idx MyIndexType;
begin
    ....
    arr(GetHash(idx)) := obj;
end;
Here I am stepping outside the strict question you asked and giving you advice about another possible way of obtaining a quick employee->salary lookup cache. This is the ultimate purpose of your associative-array question, as I read from your comment, so maybe this could be useful:
If you are using associative arrays to build a fast look-up mechanism, and if you can use the Oracle 11gR2 new features, an easier way of obtaining this caching is to rely on the native RESULT_CACHE feature, which has been introduced both for queries (via the RESULT_CACHE hint) and for PL/SQL functions.
For PL/SQL, you can create a function whose result value is cached, like this one:
create or replace function cached_employee_salary(employee number)
return number RESULT_CACHE is
result number;
begin
select salary
into result
from employees e
where e.code = employee;
return result;
end;
The RESULT_CACHE keyword instructs Oracle to keep an in-memory cache of the result values and to reuse them on subsequent calls (when the function is called with the same parameters, of course).
This cache has the following advantages compared to associative arrays:
It is shared among all sessions: cached data is not kept in the private memory allocated to each session, so it uses less memory overall.
Oracle is smart enough to detect that the function calculates its results by accessing the employees table, and it automatically invalidates the cached results if the table data is modified.
Of course, I suggest you run some tests to see whether this optimization gives tangible results in your case; it mostly depends on how complex the calculation in your function is.
You can also rely on an analogous feature introduced for SQL queries, triggered by the /*+ RESULT_CACHE */ hint:
select /*+ RESULT_CACHE */ salary
from employees e
where e.code = employee;
This hint instructs Oracle to store the result set of the query and reuse it on subsequent executions; it too is kept in memory.
This hint also has the advantage that, since a hint is syntactically just a special comment, the query keeps working without any modification even on servers older than 11gR2, whereas for the cached-function version you would need some "conditional compilation" magic to make it compile on earlier versions (where it would be an ordinary function without any result caching).
I hope this helps.

Why shouldn't I make all my PL/SQL-only VARCHAR2 32767 bytes?

Or should I ?
(The title is inspired by Gary Myers' comment in Why does Oracle varchar2 have a mandatory size as a definition parameter?)
Consider the following variables:
declare
-- database table column interfacing variable
v_a tablex.a%type; -- tablex.a is varchar2
-- PL/SQL only variable
v_b varchar2(32767); -- is this a poor convention ?
begin
select a into v_a from tablex where id = 1;
v_b := 'Some arbitrary string: ' || v_a; -- ignore potential ORA-06502
insert into tabley(id, a) values(1, v_a); -- tablex.a and tabley.a types match
v_b := v_b || ' More arbitrary characters';
end;
/
Variable v_a is used to interface with a database table column and therefore uses a %type attribute. But if I know the data type is varchar2, why shouldn't I use varchar2(4000) or varchar2(32767), which also guarantee that the string read from the database column will always fit into the PL/SQL variable? Is there any argument against this convention other than the superiority of the %type attribute?
Variable v_b is only used in PL/SQL code and is usually returned to a JDBC client (Java/Python program, Oracle SOA/OSB, etc.) or dumped into a flat file (with UTL_FILE). If the varchar2 represents, say, a CSV line, why should I bother calculating the exact maximum possible length (except to verify that the line will fit into 32767 bytes in all cases, so I don't need a CLOB) and recalculate it every time my data model changes?
There are plenty of questions covering varchar2 length semantics in SQL and explaining why varchar2(4000) is poor practice in SQL. The difference between the SQL and PL/SQL varchar2 types is also well covered:
What is the size limit for a varchar2 PL/SQL subprogram argument in Oracle?
VARCHAR(MAX) versus VARCHAR(n) in Oracle
Why does Oracle varchar2 have a mandatory size as a definition parameter?
Why does VARCHAR need length specification?
What is the default size of a varchar2 input to Oracle stored procedure, and can it be changed?
Why using anything else but VARCHAR2(4000) to store strings in an Oracle database?
Why does an oracle plsql varchar2 variable need a size but a parameter does not?
Ask Tom question: "I work with a modelers group, who would like to define every varchar2 field with the maximum length."
The only place where I have seen this issue discussed is points #3 and #4 in an answer by APC:
The database uses the length of a variable when allocating memory for PL/SQL collections. As that memory comes out of the PGA supersizing the variable declaration can lead to programs failing because the server has run out of memory.
There are similar issues with the declaration of single variables in PL/SQL programs, it is just that collections tend to multiply the problem.
For example, Oracle PL/SQL Programming, 5th Edition by Steven Feuerstein doesn't mention any drawbacks of declaring overly long varchar2 variables, so it can't be a critical mistake, right?
Update
After some more googling I found out that Oracle documentation has evolved during releases:
A quote from PL/SQL User's Guide and Reference 10g Release 2 Chapter 3 PL/SQL Datatypes:
Small VARCHAR2 variables are optimized for performance, and larger ones are optimized for efficient memory use. The cutoff point is 2000 bytes. For a VARCHAR2 that is 2000 bytes or longer, PL/SQL dynamically allocates only enough memory to hold the actual value. For a VARCHAR2 variable that is shorter than 2000 bytes, PL/SQL preallocates the full declared length of the variable. For example, if you assign the same 500-byte value to a VARCHAR2(2000 BYTE) variable and to a VARCHAR2(1999 BYTE) variable, the former takes up 500 bytes and the latter takes up 1999 bytes.
A quote from PL/SQL User's Guide and Reference 11g Release 1 Chapter 3 PL/SQL Datatypes:
For a CHAR variable, or for a VARCHAR2 variable whose maximum size is less than 2,000 bytes, PL/SQL allocates enough memory for the maximum size at compile time. For a VARCHAR2 whose maximum size is 2,000 bytes or more, PL/SQL allocates enough memory to store the actual value at run time. In this way, PL/SQL optimizes smaller VARCHAR2 variables for performance and larger ones for efficient memory use.
For example, if you assign the same 500-byte value to VARCHAR2(1999 BYTE) and VARCHAR2(2000 BYTE) variables, PL/SQL allocates 1999 bytes for the former variable at compile time and 500 bytes for the latter variable at run time.
But PL/SQL User's Guide and Reference 11g Release 2 Chapter 3 PL/SQL Datatypes doesn't mention memory allocation any more and I fail to find any other information about memory allocation at all. (I'm using this release so I check only 11.2 documentation.) The same holds also for PL/SQL User's Guide and Reference 12c Release 1 Chapter 3 PL/SQL Datatypes.
I also found an answer by Jeffrey Kemp that addresses this question too. However Jeffrey's answer refers to 10.2 documentation and the question is not about PL/SQL at all.
It looks like this is one of the areas where the PL/SQL functionality has evolved over releases when Oracle has implemented different optimizations.
Note this also means that some of the answers listed above are release specific, even though that is not explicitly mentioned in those questions and answers. As time passes and the use of older Oracle releases ends (me daydreaming?), that information will become outdated (though it might take decades).
The conclusion above is backed with the following quote from chapter 12 Tuning PL/SQL Applications for Performance of PL/SQL Language Reference 11g R1:
Declare VARCHAR2 Variables of 4000 or More Characters
You might need to allocate large VARCHAR2 variables when you are not sure how big an expression result will be. You can conserve memory by declaring VARCHAR2 variables with large sizes, such as 32000, rather than estimating just a little on the high side, such as by specifying 256 or 1000. PL/SQL has an optimization that makes it easy to avoid overflow problems and still conserve memory. Specify a size of more than 4000 characters for the VARCHAR2 variable; PL/SQL waits until you assign the variable, then only allocates as much storage as needed.
This issue is no longer mentioned in 11g R2 nor 12c R1 version of the document. This is in line with the evolution of the chapter 3 PL/SQL Datatypes.
Answer:
Since 11gR2 it makes no difference, from a memory use point of view, whether you use varchar2(10) or varchar2(32767): the Oracle PL/SQL compiler takes care of the dirty details for you in an optimal fashion!
For releases prior to 11gR2 there is a cutoff point where different memory management strategies are used, and this is clearly documented in each release's PL/SQL Language Reference.
The above only applies to PL/SQL-only variables when there is no natural length restriction that can be derived from the problem domain. If a varchar2 variable represents a GTIN-14, then one should declare it as varchar2(14).
When a PL/SQL variable interfaces with a table column, use the %type attribute, as that is the zero-effort way to keep your PL/SQL code and database structure in sync.
Memory test results:
I ran a memory analysis in Oracle Database 11g Enterprise Edition Release 11.2.0.3.0, with the following results:
str_size iterations UGA PGA
-------- ---------- ----- ------
10 100 65488 0
10 1000 65488 65536
10 10000 65488 655360
32767 100 65488 0
32767 1000 65488 65536
32767 10000 65488 655360
Because the PGA changes are identical and depend only on iterations, not on str_size, I conclude that the declared varchar2 size doesn't matter. The test might be too naïve though; comments welcome!
The test script:
-- plsql_memory is a convenience package wrapping the sys.v_$mystat and
-- sys.v_$statname views, written by Steven Feuerstein and available in the
-- code-zip file accompanying his book.
set verify off
define str_size=&1
define iterations=&2
declare
type str_list_t is table of varchar2(&str_size);
begin
plsql_memory.start_analysis;
declare
v_strs str_list_t := str_list_t();
begin
for i in 1 .. &iterations
loop
v_strs.extend;
v_strs(i) := rpad(to_char(i), 10, to_char(i));
end loop;
plsql_memory.show_memory_usage;
end;
end;
/
exit
Test run example:
$ sqlplus -SL <CONNECT_STR> @memory-test.sql 32767 10000
Change in UGA memory: 65488 (Current = 1927304)
Change in PGA memory: 655360 (Current = 3572704)
PL/SQL procedure successfully completed.
$
Maybe it's because I came of age in the era when the most memory any system had was 48k and then, happy days, up to a full 64k. And it was not virtual: what you allocated is what you got (WYAIWYG, "WAY-wig"?). I see in a lot of younger programmers a tendency to be lazy in their design and to hide design flaws by throwing more memory at them.
If we get into the habit of just typing varchar2(MAX) whenever we define a string variable, we stop thinking about the length. But sometimes length matters. If we haven't already done so, as soon as we type the ( then we should stop and put some thought into how big it really needs to be. Does the size matter here? If so, what is a reasonable maximum (or minimum)? This forces us to look beyond the bytes and fields and indexes to the actual "thing" we are trying to work with. That is never a bad idea.
Discipline is hard. We should be developing habits to enforce it whenever we can. Good data and code design is difficult by nature. There are tools and techniques we can use to make it a little easier, but we shouldn't be doing anything just because it is easier. That's the path to the Dark Side and it catches up to us sooner or later.
I think the problem is the same as in any other piece of software: allocating too much memory hurts performance and increases the likelihood of failure once all memory has been consumed.
In my opinion, use %type as much as possible, since it prevents mistakes when the data type or length changes and makes it clear where the variable comes from.
