Need advice on how to optimize my PL/SQL function (Oracle)

I need help/advice on optimizing my function. The function should look up attribute values in the table REASON_ATTR_VALUE, using the attribute names from the input XML file, and generate an output XML file.
Inside the function I transfer the data from the input file into the collection v_attrs_in (containing the reason identifiers and the attribute names to be checked), check the data against the table REASON_ATTR_VALUE, and build a second collection v_attrs (containing reason identifiers, attribute names and values). From this collection I generate the output file.
Full text of function:
function get_reason_attr_value (p_reasons clob) return clob
is
  v_attrs           reason_attrs;
  v_attrs_in        reasons;
  v_reason_ids      reason_id_list := reason_id_list();
  v_xml_reasons_in  xmltype;
  v_xml_reasons_out xmltype;
  v_xml_reason      xmltype;
begin
  if p_reasons is null then
    throw('Input parameter is empty'); -- throw: local helper that raises an error
  end if;
  v_xml_reasons_in := xmltype(p_reasons);
  select xmlelement("Reasons") into v_xml_reasons_out from dual;
  -- collect reason ids and attribute names from the input XML;
  -- the (+) keeps reasons that carry no Attr elements
  select id   as reason_id,
         name as attr_name
  bulk collect into v_attrs_in
  from XMLTable('Reasons/Reason' passing v_xml_reasons_in
         columns id        number  path 'Id',
                 attr_list xmltype path 'AttrList/Attr') xt_att,
       XMLTable('Attr' passing xt_att.attr_list
         columns name varchar2(200) path 'Name') (+);
  if v_attrs_in is not empty then
    -- look up the currently effective value for every requested attribute
    select x.reason_id  as reason_id,
           t.attr_name  as attr_name,
           a.attr_value as attr_value
    bulk collect into v_attrs
    from table(v_attrs_in) x
    join reason_attrs t
      on t.attr_name = x.attr_name
    left join reason_attr_value a
      on a.reason_id = x.reason_id
     and a.attr_id   = t.attr_id
     and sysdate between a.date_begin and nvl(a.date_end, sysdate + 1);
    if v_attrs is not empty then
      select distinct x.reason_id
      bulk collect into v_reason_ids
      from table(v_attrs) x
      where x.attr_value is not null;
      if v_reason_ids is not empty then
        -- build one <Reason> element per id and append it to the output document
        for i in v_reason_ids.first .. v_reason_ids.last loop
          select xmlelement("Reason",
                   xmlforest(
                     v_reason_ids(i) as "Id",
                     xmlagg(xmlelement("Attr",
                              xmlattributes(x.attr_name as "name"),
                              x.attr_value)) as "AttrList"))
          into v_xml_reason
          from table(v_attrs) x
          where x.reason_id = v_reason_ids(i);
          v_xml_reasons_out := v_xml_reasons_out.appendChildXML('//Reasons', v_xml_reason);
        end loop;
      end if;
    end if;
  end if;
  return v_xml_reasons_out.getClobVal();
end get_reason_attr_value;
Description of the collections used in the function:
type reason is record
(
reason_id number,
attr_name varchar2(200)
);
type reason_attr is record
(
reason_id number,
attr_name varchar2(200),
attr_value varchar2(2000)
);
type reasons is table of reason;
type reason_attrs is table of reason_attr;
type reason_id_list is table of number;
Example input XML file (it may contain thousands of Reason elements):
<Reasons>
<Reason>
<Id>3</Id>
<AttrList>
<Attr>
<Name>include_in_recalculation</Name>
</Attr>
</AttrList>
</Reason>
<Reason>
<Id>5</Id>
<AttrList>
<Attr>
<Name>include_in_recalculation</Name>
</Attr>
</AttrList>
</Reason>
</Reasons>
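One direction for the optimization itself: the per-reason loop with its repeated appendChildXML calls can usually be collapsed into a single XMLELEMENT/XMLAGG statement, so the whole <Reasons> document is built in one SQL round trip. A rough sketch against the v_attrs collection above (note the null-value filtering here is simplified and would need to mirror the v_reason_ids logic exactly):
-- Sketch only: build the entire output document in one statement
-- instead of looping over v_reason_ids and appending child nodes.
select xmlelement("Reasons", xmlagg(r.reason_xml))
into v_xml_reasons_out
from (select xmlelement("Reason",
               xmlforest(x.reason_id as "Id",
                         xmlagg(xmlelement("Attr",
                                  xmlattributes(x.attr_name as "name"),
                                  x.attr_value)) as "AttrList"))
             as reason_xml
      from table(v_attrs) x
      where x.attr_value is not null
      group by x.reason_id) r;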

Related

PL/SQL : Need to compare data for every field in a table in plsql

I need to create a procedure which takes a collection as input and compares the data with staging table data row by row for every field (approx. 50 columns).
Business logic:
whenever a staging table column value mismatches the corresponding collection variable value, I need to update 'FAIL' into the staging table STATUS column and the reason into the REASON column for that row.
If matched, I need to update 'SUCCESS' in the STATUS column.
The payload will be approx. 500 rows in each call.
I have created the sample script below:
PKG Specification :
CREATE OR REPLACE PACKAGE process_data
IS
  TYPE pass_data_rec IS RECORD
  (
    p_eid    employee.eid%type,
    p_ename  employee.ename%type,
    p_salary employee.salary%type,
    p_dept   employee.dept%type
  );
  TYPE p_data_tab IS TABLE OF pass_data_rec INDEX BY binary_integer;
  PROCEDURE comp_data(inpt_data IN p_data_tab);
END;
PKG Body:
CREATE OR REPLACE PACKAGE BODY process_data
IS
  PROCEDURE comp_data (inpt_data IN p_data_tab)
  IS
    status  VARCHAR2(10);
    reason  VARCHAR2(1000);
    cnt1    NUMBER;
    v_eid   employee_copy.eid%type;
    v_ename employee_copy.ename%type;
  BEGIN
    FOR i IN 1..inpt_data.count
    LOOP
      SELECT ec1.eid, ec1.ename, COUNT(*) over ()
        INTO v_eid, v_ename, cnt1
        FROM employee_copy ec1
       WHERE ec1.eid = inpt_data(i).p_eid;
      IF cnt1 > 0 THEN
        IF (v_eid = inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
          UPDATE employee_copy SET status = 'SUCCESS' WHERE eid = inpt_data(i).p_eid;
        ELSE
          UPDATE employee_copy SET status = 'FAIL' WHERE eid = inpt_data(i).p_eid;
        END IF;
      ELSE
        NULL;
      END IF;
    END LOOP;
    COMMIT;
    status := 'success';
  EXCEPTION
    WHEN OTHERS THEN
      status := 'fail';
      --reason := sqlerrm;
  END;
END;
But with this approach I have the issues mentioned below.
I need to declare local variables for each column value.
I need to compare all variable data using the 'and' operator. I am not sure whether that is the correct way, because if there are 50 columns the IF condition becomes very heavy.
IF (v_eid=inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
I need to update the REASON column when any column data mismatches (the first mismatched column name) for that row; with this approach I am not able to achieve that.
Please suggest any other good way to achieve this requirement.
Edit:
There is only one table at my end, i.e. the target table. The input will come from some other source as a collection object.
REVISED Answer
You could load the records into a temp table, but unless you want additional processing it's not necessary. AFAIK there is no way to identify the offending column (first one only) without slugging through column by column. However, your other concern, having to declare a variable per column, is unnecessary: you can declare a single variable defined as %rowtype, which gives you access to each column by name.
Looping through an array of data to find the occasional error is just bad (imho) when SQL is available to eliminate the good ones in one fell swoop. And it's available here. Even though your input is an array, we can use it as a table via the TABLE operator, which lets a collection be queried as though it were a database table, so the MINUS operator can still be employed. The following routine sets the appropriate status and identifies the first mismatched column for each entry in the input array. It keeps your original definitions in the package spec, but replaces the comp_data procedure.
create or replace package body process_data
is
procedure comp_data (inpt_data in p_data_tab)
is
-- define local array to hold status and reason for each entry.
type status_reason_r is record
( eid employee_copy.eid%type
, status employee_copy.status%type
, reason employee_copy.reason%type
);
type status_reason_t is
table of status_reason_r
index by pls_integer;
status_reason status_reason_t; -- an index-by table needs (and allows) no constructor
-- define error array to contain the eid for each that have a mismatched column
type error_eids_t is table of employee_copy.eid%type ;
error_eids error_eids_t;
current_matched_indx pls_integer;
/*
Helper function to identify the 1st mismatched column in an error row.
Here is where we slug our way through each column to find the first column
value mismatch. Note: there is no authoritative column order to validate
against, so for our purpose here we proceed in the order of the input
data type definition.
*/
function identify_mismatch_column(matched_indx_in pls_integer)
return varchar2
is
employee_copy_row employee_copy%rowtype;
mismatched_column employee_copy.reason%type;
begin
select *
into employee_copy_row
from employee_copy
where employee_copy.eid = inpt_data(matched_indx_in).p_eid;
-- now begins the task of finding the mismatched column.
if employee_copy_row.ename != inpt_data(matched_indx_in).p_ename
then
mismatched_column := 'employee_copy.ename';
elsif employee_copy_row.salary != inpt_data(matched_indx_in).p_salary
then
mismatched_column := 'employee_copy.salary';
elsif employee_copy_row.dept != inpt_data(matched_indx_in).p_dept
then
mismatched_column := 'employee_copy.dept';
-- elsif continue until ALL columns tested
end if;
return mismatched_column;
exception
-- NO_DATA_FOUND is the one error that cannot actually be reported in the employee_copy table.
-- It occurs when an eid exists in the input data but does not exist in employee_copy.
when NO_DATA_FOUND
then
dbms_output.put_line( 'Employee (eid)='
|| inpt_data(matched_indx_in).p_eid
|| ' does not exist in employee_copy table.'
);
return 'employee_copy.eid ID is NOT in table';
end identify_mismatch_column;
/*
Helper function to find specified eid in the initial inpt_data array
Since the resulting array of mismatching eids derives from a select without a sort,
there is no guarantee the index values actually match. Nor can we sort to build
the error array, as there is no way to know the order of eid in the initial array.
The following helper identifies the index value in the input array for the specified
eid in error.
*/
function match_indx(eid_in employee_copy.eid%type)
return pls_integer
is
l_at pls_integer := 1;
begin
while l_at <= inpt_data.count
loop
exit when eid_in = inpt_data(l_at).p_eid;
l_at := l_at + 1;
end loop;
if l_at > inpt_data.count
then
raise_application_error( -20199, 'Internal error: Find index for ' || eid_in ||' not found');
end if;
return l_at;
end match_indx;
-- Main
begin
-- initialize the status table for each input entry;
-- this yields a status_reason table in a 1:1 relationship with the input array.
for i in 1..inpt_data.count
loop
status_reason(i).eid := inpt_data(i).p_eid;
status_reason(i).status :='SUCCESS';
end loop;
/*
We can assume the majority of data in the input array is valid, meaning the columns match.
We eliminate the valid rows by selecting everything and then MINUSing the rows that match on
every column. To accomplish this, cast the input with the TABLE function, allowing its use in SQL.
The following produces an array of eids that have at least 1 column mismatch.
*/
select p_eid
bulk collect into error_eids
from (select p_eid, p_ename, p_salary, p_dept from TABLE(inpt_data)
minus
select eid, ename, salary, dept from employee_copy
) exs;
/*
The error_eids array now contains the eid for each mismatched data item.
Mark the status as failed, then begin the long hard process of identifying
the first column causing the mismatch.
The following loop uses the nested functions to slug the way through.
This keeps the main-line logic clear.
*/
for i in 1 .. error_eids.count -- if all inpt_data rows match then count is 0 and we bypass the entire loop
loop
current_matched_indx := match_indx(error_eids(i));
status_reason(current_matched_indx).status := 'FAIL';
status_reason(current_matched_indx).reason := identify_mismatch_column(current_matched_indx);
end loop;
-- update employee_copy with the appropriate status for each row in the input data,
-- except any eid that is in the error array but doesn't exist in the employee_copy table.
forall i in inpt_data.first .. inpt_data.last
update employee_copy
set status = status_reason(i).status
, reason = status_reason(i).reason
where eid = inpt_data(i).p_eid;
end comp_data;
end process_data;
There are a couple of other techniques used here that you may want to look into if you are not familiar with them:
Nested Functions. There are 2 functions defined and used in the procedure.
Bulk Processing. That is Bulk Collect and Forall.
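If those are new to you, here is a minimal, self-contained illustration of both against the employee_copy table (the WHERE condition and the UPDATE itself are made up purely for the example):
declare
  type eid_tab_t is table of employee_copy.eid%type;
  l_eids eid_tab_t;
begin
  -- BULK COLLECT: fetch all matching ids into a collection in one round trip.
  select eid
    bulk collect into l_eids
    from employee_copy
   where status = 'FAIL';
  -- FORALL: bind the whole collection into one bulk UPDATE instead of
  -- issuing a separate statement per loop iteration.
  forall i in 1 .. l_eids.count
    update employee_copy
       set reason = upper(reason)
     where eid = l_eids(i);
end;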
Good Luck.
ORIGINAL Answer
It is NOT necessary to compare each column nor build a string by concatenating. As you indicated, comparing 50 columns gets pretty heavy, so let the DBMS do most of the lifting. The MINUS operator does exactly what you need.
... the MINUS operator, which returns only unique rows returned by the
first query but not by the second.
Given that, this task needs only 2 updates: 1 to mark "fail", and 1 to mark "success". So try:
create table e( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
);
create table stage ( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
, status varchar2(20)
, reason varchar2(20)
);
-- create package spec and body
create or replace package process_data
is
procedure comp_data;
end process_data;
create or replace package body process_data
is
  procedure comp_data
  is
  begin
    update stage
       set status = 'failed'
         , reason = 'No matching e row'
     where e_id in ( select e_id
                     from (select e_id, col1, col2 from stage
                           minus
                           select e_id, col1, col2 from e
                          ) exs
                   );
    update stage
       set status = 'success'
     where status is null;
  end comp_data;
end process_data;
-- test
-- populate tables
insert into e(e_id, col1, col2)
select 1, 'ABC', 'def' from dual union all
select 2, 'No', 'Not any' from dual union all
select 3, 'ok', 'best ever' from dual union all
select 4, 'xx', 'zzzzzz' from dual;
insert into stage(e_id, col1, col2)
select 1, 'ABC', 'def' from dual union all
select 2, 'No', 'Not any more' from dual union all
select 4, 'yy', 'zzzzzz' from dual union all
select 5, 'no e', 'nnnnn' from dual;
-- run procedure
begin
process_data.comp_data;
end;
-- check results
select * from stage;
Don't ask. Yes, you must list every column you wish compared in each of the queries involved in the MINUS operation.
I know the documentation link is old (10gR2), but actually finding Oracle documentation is a royal pain. The MINUS operator still functions the same in 19c.
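If typing out the 50-column lists is the pain point, they can at least be generated from the data dictionary instead of by hand. A convenience sketch (the table name and the excluded bookkeeping columns are just for illustration):
-- Produces the comma-separated column list to paste into both sides
-- of the MINUS query, skipping the STATUS/REASON bookkeeping columns.
select listagg(column_name, ', ') within group (order by column_id)
from user_tab_columns
where table_name = 'EMPLOYEE_COPY'
  and column_name not in ('STATUS', 'REASON');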

How to access and query objects passed as parameter to a procedure while converting from Oracle to postgresql

I have a procedure in Oracle that I need to convert to PostgreSQL and need help with it. It passes a collection of objects to a procedure. The procedure then checks whether each object is present in a database table, and if present it prints a message saying that the specific element is found/present. If some element passed to the procedure is not present in the table, the procedure just doesn't do anything. I have to write the equivalent of that in PostgreSQL. I think the heart of the issue is this statement:
SELECT COUNT (*)
INTO v_cnt
FROM TABLE (p_cust_tab_type_i) pt
WHERE pt.ssn = cc.ssn;
In Oracle a collection can be treated as a table and queried, but I don't know how to do that in PostgreSQL. The code to create the table, add data, create the procedure, and call the procedure passing the collection (3 objects), along with its output, is posted below. Can someone suggest how this can be done in PostgreSQL?
Here is the Oracle-related code and detail:
--create table
create table temp_n_tab1
(ssn number,
fname varchar2(20),
lname varchar2(20),
items varchar2(100));
/
--add data
insert into temp_n_tab1 values (1,'f1','l1','i1');
--SKIP no. ssn no. 2 intentionally..
insert into temp_n_tab1 values (3,'f3','l3','i3');
insert into temp_n_tab1 values (4,'f4','l4','i4');
insert into temp_n_tab1 values (5,'f5','l5','i5');
insert into temp_n_tab1 values (6,'f6','l6','i6');
commit;
--create procedure
SET SERVEROUTPUT ON
CREATE OR REPLACE PROCEDURE temp_n_proc (
p_cust_tab_type_i IN temp_n_customer_tab_type)
IS
t_cust_tab_type_i temp_n_customer_tab_type;
v_cnt NUMBER;
v_ssn temp_n_tab1.ssn%TYPE;
CURSOR c
IS
SELECT ssn
FROM temp_n_tab1
ORDER BY 1;
BEGIN
--t_cust_tab_type_i := p_cust_tab_type_i();
FOR cc IN c
LOOP
SELECT COUNT (*)
INTO v_cnt
FROM TABLE (p_cust_tab_type_i) pt
WHERE pt.ssn = cc.ssn;
IF (v_cnt > 0)
THEN
DBMS_OUTPUT.put_line (
'The array element '
|| TO_CHAR (cc.ssn)
|| ' exists in the table.');
END IF;
END LOOP;
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.PUT_LINE (SQLERRM);
END;
/
--caller proc
SET SERVEROUTPUT ON
declare
array temp_n_customer_tab_type := temp_n_customer_tab_type();
begin
for i in 1 .. 3
loop
array.extend;
array(i) := temp_n_cust_header_type( i, 'name ' || i, 'lname ' || i,i*i*i*i );
end loop;
temp_n_proc( array );
end;
/
caller proc output:
The array element 1 exists in the table.
The array element 3 exists in the table.
When you create a table in Postgres, a type with the same name is also created. So you can simply pass an array of the table's type as a parameter to the function.
Inside the function you can then use unnest() to treat the array like a table.
The following is the closest match to your original Oracle code:
create function temp_n_proc(p_cust_tab_type_i temp_n_tab1[])
returns void
as
$$
declare
l_rec record;
l_msg text;
l_count integer;
BEGIN
for l_rec in select t1.ssn
from temp_n_tab1 t1
loop
select count(*)
into l_count
from unnest(p_cust_tab_type_i) as t
where t.ssn = l_rec.ssn;
if l_count > 0 then
raise notice 'The array element % exists in the table', l_rec.ssn;
end if;
end loop;
END;
$$
language plpgsql;
The row-by-row processing is not a good idea to begin with (neither in Postgres, nor in Oracle). It would be a lot more efficient to get the existing elements in a single query:
create function temp_n_proc(p_cust_tab_type_i temp_n_tab1[])
returns void
as
$$
declare
l_rec record;
l_msg text;
BEGIN
for l_rec in select t1.ssn
from temp_n_tab1 t1
where t1.ssn in (select t.ssn
from unnest(p_cust_tab_type_i) as t)
loop
raise notice 'The array element % exists in the table', l_rec.ssn;
end loop;
return;
END;
$$
language plpgsql;
You can call the function like this:
select temp_n_proc(array[row(1,'f1','l1','i1'),
row(2,'f2','l2','i2'),
row(3,'f3','l3','i3')
]::temp_n_tab1[]);
However, a more "Postgres-like" and much more efficient way would be to not use PL/pgSQL for this, but to create a simple SQL function that returns the messages as a result:
create or replace function temp_n_proc(p_cust_tab_type_i temp_n_tab1[])
returns table(message text)
as
$$
select format('The array element %s exists in the table', t1.ssn)
from temp_n_tab1 t1
where t1.ssn in (select t.ssn
from unnest(p_cust_tab_type_i) as t)
$$
language sql;
This returns the output of the function as a result rather than using the clumsy raise notice.
You can use it like this:
select *
from temp_n_proc(array[row(1,'f1','l1','i1'),
row(2,'f2','l2','i2'),
row(3,'f3','l3','i3')
]::temp_n_tab1[]);

How to deal with sequence in insert from XMLTable?

I have written a PL/SQL function that takes input in XML format for the
following table:
TABLE: TBL_MEDICAL_CENTER_BILLS
Name          Null     Type
------------- -------- -------------
MED_RECORDNO  NOT NULL NUMBER
MED_EMPID              NVARCHAR2(10)
MED_BILL_HEAD          NVARCHAR2(20)
MED_DATE               DATE
MED_AMOUNT             FLOAT(126)
Here is the function code:
FUNCTION save_medical_center_bills(medical_bill_data NVARCHAR2) RETURN clob IS
  ret     clob;
  xmlData XMLType;
  v_code  NUMBER;
  v_errm  VARCHAR2(100);
BEGIN
  xmlData := XMLType(medical_bill_data);
  INSERT INTO TBL_MEDICAL_CENTER_BILLS
  SELECT x.*
  FROM XMLTABLE('/medical_center_bill'
         PASSING xmlData
         COLUMNS MED_RECORDNO NUMBER PATH 'MED_RECORDNO' default null,
                 MED_EMPID NVARCHAR2(11) PATH 'employee_id',
                 MED_BILL_HEAD NVARCHAR2(20) PATH 'bill_head',
                 MED_DATE DATE PATH 'effective_date',
                 MED_AMOUNT FLOAT PATH 'bill_amount'
       ) x;
  ret := to_char(sql%rowcount);
  COMMIT;
  RETURN '<result><status affectedRow=' || ret || '>success</status></result>';
EXCEPTION
  WHEN OTHERS THEN
    v_code := SQLCODE;
    v_errm := SUBSTR(SQLERRM, 1, 100);
    DBMS_OUTPUT.PUT_LINE(v_code || ' ' || v_errm);
    -- '<result><status>Error</status> <error_message>'|| 'Error Code:' || v_code || ' ' || 'Error Message:' || v_errm ||'</error_message> </result>';
    RETURN '<result><status>Error</status> <error_message>' || 'Error Message:' || v_errm || '</error_message> </result>';
END save_medical_center_bills;
However, I want the table's first column MED_RECORDNO to be populated from an incrementing sequence (at the moment I am leaving it null, since I don't know how to put the sequence in the XMLTable clause), while the rest of the
inputs [MED_EMPID, MED_BILL_HEAD, MED_DATE, MED_AMOUNT] are taken from the XML passed to the function.
I created a sequence and a trigger to keep this sequence incrementing for that table column MED_RECORDNO:
CREATE SEQUENCE MED_RECORDNO_SEQ;
create or replace TRIGGER MED_RECORDNO_TRIGGER
BEFORE INSERT ON TBL_MEDICAL_CENTER_BILLS FOR EACH ROW
WHEN (new.MED_RECORDNO is null)
DECLARE
v_id TBL_MEDICAL_CENTER_BILLS.MED_RECORDNO%TYPE;
BEGIN
SELECT MED_RECORDNO_seq.nextval INTO v_id FROM DUAL;
:new.MED_RECORDNO := v_id;
END;
As you can see, my XMLTable inserts 4 column values into a 5-column table, because column MED_RECORDNO takes its value from sequence MED_RECORDNO_SEQ using trigger MED_RECORDNO_TRIGGER.
I don't know anything about doing this. If you have ever experienced such a thing, please share your idea.
I sort of hinted at this in an earlier answer. You should specify the names of the columns in the table you are inserting into; this is good practice even if you are populating all of them, as it avoids surprises if the table structure changes (or differs between environments), and makes it much easier to spot mistakes like having columns or values in the wrong order.
INSERT INTO TBL_MEDICAL_CENTER_BILLS (MED_EMPID, MED_BILL_HEAD, MED_DATE, MED_AMOUNT)
SELECT x.MED_EMPID, x.MED_BILL_HEAD, x.MED_DATE, x.MED_AMOUNT
FROM XMLTABLE('/medical_center_bill'
PASSING xmlData
COLUMNS MED_EMPID NVARCHAR2(11) PATH 'employee_id',
MED_BILL_HEAD NVARCHAR2(20) PATH 'bill_head' ,
MED_DATE DATE PATH 'effective_date',
MED_AMOUNT FLOAT PATH 'bill_amount'
) x;
The insert you have should actually work (if the column order in the table matches); the trigger will still replace the null value you get from the XMLTable with the sequence value. At least, until you make the MED_RECORDNO column not-null, which you probably want to do if it's the primary key.
Incidentally, if you're on 11g or higher your trigger can assign the sequence straight to the NEW pseudorecord:
create or replace TRIGGER MED_RECORDNO_TRIGGER
BEFORE INSERT ON TBL_MEDICAL_CENTER_BILLS
FOR EACH ROW
BEGIN
:new.MED_RECORDNO := MED_RECORDNO_seq.nextval;
END;
The WHEN (new.MED_RECORDNO is null) check implies you sometimes want to allow a value to be specified; that is a bad idea, as manually inserted values can clash with sequence values, giving you either duplicates or a unique/primary key exception.
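If you would rather not rely on a trigger at all, NEXTVAL can be referenced directly in the SELECT list of an INSERT ... SELECT (it is allowed at the top level, though not in subqueries), so the sequence value goes in alongside the XMLTable columns. A sketch based on the insert above:
INSERT INTO TBL_MEDICAL_CENTER_BILLS (MED_RECORDNO, MED_EMPID, MED_BILL_HEAD, MED_DATE, MED_AMOUNT)
SELECT MED_RECORDNO_SEQ.nextval, x.MED_EMPID, x.MED_BILL_HEAD, x.MED_DATE, x.MED_AMOUNT
FROM XMLTABLE('/medical_center_bill'
       PASSING xmlData
       COLUMNS MED_EMPID NVARCHAR2(11) PATH 'employee_id',
               MED_BILL_HEAD NVARCHAR2(20) PATH 'bill_head',
               MED_DATE DATE PATH 'effective_date',
               MED_AMOUNT FLOAT PATH 'bill_amount'
     ) x;
On 12c and later, an identity column on MED_RECORDNO would achieve the same thing without either the trigger or the explicit NEXTVAL.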

How to use session-global variables of type collection in oracle

I have a package which declares a collection of type table of some database table's %rowtype. It also declares a function to populate the package-level variable with some data. I can print the data with dbms_output, and it seems fine.
But when I use the package-level variable in some SQL, I get the following error:
ORA-21700: object does not exist or is marked for delete
ORA-06512: at "TESTDB.SESSIONGLOBALS", line 17
ORA-06512: at line 5
Here is my code:
create some dummy data:
drop table "TESTDATA";
/
CREATE TABLE "TESTDATA"
( "ID" NUMBER NOT NULL ENABLE,
"NAME" VARCHAR2(20 BYTE),
"STATUS" VARCHAR2(20 BYTE)
);
/
insert into "TESTDATA" (id, name, status) values (1, 'Hans Wurst', 'J');
insert into "TESTDATA" (id, name, status) values (2, 'Hans-Werner', 'N');
insert into "TESTDATA" (id, name, status) values (3, 'Hildegard v. Bingen', 'J');
/
now create the package:
CREATE OR REPLACE
PACKAGE SESSIONGLOBALS AS
type t_testdata is table of testdata%rowtype;
v_data t_testdata := t_testdata();
function load_testdata return t_testdata;
END SESSIONGLOBALS;
and the package body:
CREATE OR REPLACE
PACKAGE BODY SESSIONGLOBALS AS
function load_testdata return t_testdata AS
v_sql varchar2(500);
BEGIN
if SESSIONGLOBALS.v_data.count = 0
then
v_sql := 'select * from testdata';
execute immediate v_sql
bulk collect into SESSIONGLOBALS.v_data;
dbms_output.put_line('data count:');
dbms_output.put_line(SESSIONGLOBALS.v_data.count);
end if; -- SESSIONGLOBALS.v_data.count = 0
-- ******************************
-- this line throws the error
insert into testdata select * from table(SESSIONGLOBALS.v_data);
-- ******************************
return SESSIONGLOBALS.v_data;
END load_testdata;
END SESSIONGLOBALS;
execute the sample:
DECLARE
v_Return SESSIONGLOBALS.T_TESTDATA;
BEGIN
v_Return := SESSIONGLOBALS.LOAD_TESTDATA();
dbms_output.put_line('data count (direct access):');
dbms_output.put_line(SESSIONGLOBALS.v_data.count);
dbms_output.put_line('data count (return value of function):');
dbms_output.put_line(v_Return.count);
END;
If the line marked above is commented out, I get the expected result.
So can anyone tell me why the exception stated above occurs?
BTW: it is absolutely necessary for me to execute the statement which populates the collection as dynamic SQL, because the table name is not known at compile time. (v_sql := 'select * from testdata';)
The solution is to use pipelined functions in the package.
See: http://docs.oracle.com/cd/B19306_01/appdev.102/b14289/dcitblfns.htm#CHDJEGHC (the section "Pipelining Between PL/SQL Table Functions" does the trick).
My package looks like this now (please take the table script from my question):
create or replace
PACKAGE SESSIONGLOBALS AS
v_force_refresh boolean;
function set_force_refresh return boolean;
type t_testdata is table of testdata%rowtype;
v_data t_testdata;
function load_testdata return t_testdata;
function get_testdata return t_testdata pipelined;
END SESSIONGLOBALS;
/
create or replace
PACKAGE BODY SESSIONGLOBALS AS
function set_force_refresh return boolean as
begin
SESSIONGLOBALS.v_force_refresh := true;
return true;
end set_force_refresh;
function load_testdata return t_testdata AS
v_sql varchar2(500);
v_i number(10);
BEGIN
if SESSIONGLOBALS.v_data is null then
SESSIONGLOBALS.v_data := SESSIONGLOBALS.t_testdata();
end if;
if SESSIONGLOBALS.v_force_refresh = true then
SESSIONGLOBALS.v_data.delete;
end if;
if SESSIONGLOBALS.v_data.count = 0
then
v_sql := 'select * from testdata';
execute immediate v_sql
bulk collect into SESSIONGLOBALS.v_data;
end if; -- SESSIONGLOBALS.v_data.count = 0
return SESSIONGLOBALS.v_data;
END load_testdata;
function get_testdata return t_testdata pipelined AS
v_local_data SESSIONGLOBALS.t_testdata := SESSIONGLOBALS.load_testdata();
begin
if v_local_data.count > 0 then
for i in v_local_data.first .. v_local_data.last
loop
pipe row(v_local_data(i));
end loop;
end if;
end get_testdata;
END SESSIONGLOBALS;
/
Now I can do a select in SQL like this:
select * from table(SESSIONGLOBALS.get_testdata());
and my data collection is only populated once.
Nevertheless, from a performance point of view it is not quite comparable with a simple
select * from testdata;
but I'll try out this concept for some more complicated use cases. The goal is to avoid some really huge select statements involving lots of tables distributed among several schemas (the English plural of schema...?).
The syntax you use does not work:
insert into testdata select * from table(SESSIONGLOBALS.v_data); -- does not work
You have to use something like this:
forall i in 1..v_data.count
INSERT INTO testdata VALUES (SESSIONGLOBALS.v_data(i).id,
SESSIONGLOBALS.v_data(i).name,
SESSIONGLOBALS.v_data(i).status);
(which actually duplicates the rows in the table)
Package-level types cannot be used in SQL. Even if your SQL is called from within a package, it still can't see that package's types.
I'm not sure how you got that error message; when I compiled the package I got this error, which gives a good hint at the problem:
PLS-00642: local collection types not allowed in SQL statements
To fix this problem, create a type and a nested table of that type:
create or replace type t_testdata_rec is object
(
"ID" NUMBER,
"NAME" VARCHAR2(20 BYTE),
"STATUS" VARCHAR2(20 BYTE)
);
create or replace type t_testdata as table of t_testdata_rec;
/
The dynamic SQL to populate the package variable gets more complicated:
execute immediate
'select cast(collect(t_testdata_rec(id, name, status)) as t_testdata)
from testdata ' into SESSIONGLOBALS.v_data;
But now the insert will work as-is.
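For completeness, this is the statement from the question that now compiles, assuming v_data is re-declared with the schema-level t_testdata type:
-- Works now because SQL can see the schema-level collection type
-- (and, as in the original, it duplicates the rows in the table):
insert into testdata
select * from table(SESSIONGLOBALS.v_data);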

How to return a Cursor for pl/sql table

I select data from several tables. Then I need to edit the data returned from the cursor before returning it. The cursor will then be passed to a Perl script to display the rows.
To do that I build a PL/SQL table, as in the following code. What I need to know is how to return the cursor for that table.
At present I get the error "table or view doesn't exist". The test code I use for a simple table is attached here.
CREATE OR REPLACE FUNCTION test_rep
RETURN SYS_REFCURSOR
AS
CURSOR rec_Cur IS
SELECT table1.NAME,
table1.ID
FROM TESTREPORT table1;
TYPE rec_Table IS TABLE OF rec_Cur%ROWTYPE INDEX BY PLS_INTEGER;
working_Rec_Table rec_Table;
TYPE n_trade_rec IS RECORD
(
NAME VARCHAR2(15),
ID NUMBER
);
TYPE ga_novated_trades IS TABLE OF n_trade_rec index by VARCHAR2(15);
va_novated_trades ga_novated_trades;
v_unique_key VARCHAR2(15);
TYPE db_cursor IS REF CURSOR;
db_cursor2 db_cursor;
BEGIN
OPEN rec_Cur;
FETCH rec_Cur BULK COLLECT INTO working_Rec_Table;
FOR I IN 1..working_Rec_Table.COUNT LOOP
v_unique_key := working_Rec_Table(I).NAME;
va_novated_trades(v_unique_key).NAME := working_Rec_Table(I).NAME;
va_novated_trades(v_unique_key).ID := working_Rec_Table(I).ID;
END LOOP; --FOR LOOP
OPEN db_cursor2 FOR SELECT * FROM va_novated_trades; --ERROR LINE
CLOSE rec_Cur;
RETURN db_cursor2;
END test_rep;
/
Basically there is a way to select from a table type in Oracle using the TABLE() function:
SELECT * FROM table(va_novated_trades);
But this works only for schema-level table types, not PL/SQL tables (i.e. table types defined in the SCHEMA and not in a PL/SQL package):
CREATE TYPE n_trade_rec AS OBJECT
(
NAME VARCHAR2(15),
ID NUMBER
);
CREATE TYPE ga_novated_trades AS TABLE OF n_trade_rec;
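With those schema-level types in place, the end of test_rep could open the ref cursor directly over the collection. A condensed sketch (note that schema-level table types cannot be declared INDEX BY VARCHAR2, so the unique-key indexing from the original code has to be dropped or handled differently):
-- va_novated_trades is now declared with the schema-level ga_novated_trades type
OPEN db_cursor2 FOR
  SELECT * FROM TABLE(va_novated_trades);
RETURN db_cursor2;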
But I still think you should try to do it all in a query (and/or in the Perl script):
For example, there is one field where I have to analyse the 4th
character and then edit other fields accordingly
This can be achieved in the query; it could be something like:
select case when substr(one_field, 4, 1) = 'A' then 'A.' || sec_field
when substr(one_field, 4, 1) = 'B' then 'B.' || sec_field
else sec_field
end as new_sec_field,
case when substr(one_field, 4, 1) = 'A' then 100 * trd_field
when substr(one_field, 4, 1) = 'B' then 1000 * trd_field
else trd_field
end as new_trd_field,
-- and so on
from TESTREPORT
