I have a cursor that selects from TableA and a fetch loop that inserts into TableB.
I want to check whether the value already exists in TableB.
If it exists, I want to skip the insert.
create or replace
PROCEDURE DAILY_RPT (
v_start IN DATE,
v_end IN DATE)
IS
ao_out_no out_pair.out_no%type;
cursor get is
SELECT ao_out_no from tableA;
BEGIN
open get;
LOOP
fetch get into ao_out_no;
EXIT WHEN get%NOTFOUND;
if (ao_out_no = (select out_no from TableA where out_no = ao_out_no) THEN
--DO NOTHING
else
INSERT INTO TABLEB(OUT_NO) VALUES (ao_out_no);
end if;
END LOOP;
close get;
END;
I tried an IF condition that compares the variable against a subquery, and I am getting the error below.
PLS-00405: subquery not allowed in this context
if (ao_out_no = (select out_no from TableA where out_no = ao_out_no) THEN
You don't need a cursor or PL/SQL at all:
INSERT INTO TABLEB(OUT_NO)
SELECT ao_out_no
FROM tableA ta
WHERE ... -- filtering rows
AND NOT EXISTS (SELECT * From TableB tb WHERE tb.OUT_NO = ta.ao_out_no);
Use the following :
for i in (
  select out_no from TableA where out_no = ao_out_no
)
loop
  if i.out_no = ao_out_no
  then
    null; -- DO NOTHING
  else
    ...
or
create a new variable named x, assign a value to it with
select out_no into x from TableA where out_no = ao_out_no;
and then check the value returned into x (remember that SELECT INTO raises NO_DATA_FOUND when no row matches).
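A minimal sketch of that second option wired into the original procedure, assuming (as the question states) that the existence check should be against TableB rather than TableA:
create or replace procedure daily_rpt (
  v_start in date,
  v_end   in date)
is
  ao_out_no out_pair.out_no%type;
  x         tableb.out_no%type;
  cursor get is
    select ao_out_no from tablea;
begin
  open get;
  loop
    fetch get into ao_out_no;
    exit when get%notfound;
    begin
      select out_no into x from tableb where out_no = ao_out_no;
      -- a row came back, so the value already exists: skip the insert
    exception
      when no_data_found then
        insert into tableb (out_no) values (ao_out_no);
    end;
  end loop;
  close get;
end;
The nested block keeps the NO_DATA_FOUND handler local to the lookup, so the outer loop carries on with the next row.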
With corrected syntax, it would be something like this:
create or replace procedure daily_rpt
( v_start in date
, v_end in date )
as
  l_exists_check pls_integer;
begin
  for r in (
    select ao_out_no
    from tablea
  )
  loop
    select count(*) into l_exists_check
    from tableb
    where out_no = r.ao_out_no
    and rownum = 1;
    if l_exists_check > 0 then
      null; -- DO NOTHING
    else
      insert into tableb (out_no) values (r.ao_out_no);
    end if;
  end loop;
end;
However, it's inefficient to query all of the rows and then do an additional lookup for each row to decide whether you want to use it, as SQL can do that kind of thing for you. So version 2 might be something like:
create or replace procedure daily_rpt
( v_start in date
, v_end in date )
as
begin
for r in (
  select a.ao_out_no
  from tablea a
  where not exists
    ( select null
      from tableb b
      where b.out_no = a.ao_out_no )
)
loop
insert into tableb (out_no) values (r.ao_out_no);
end loop;
end;
at which point you might replace the whole loop with an insert ... where not exists (...) statement.
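For reference, that final single-statement version might look like this (a sketch using the same tablea.ao_out_no and tableb.out_no columns as above):
insert into tableb (out_no)
select a.ao_out_no
from   tablea a
where  not exists
       ( select null
         from   tableb b
         where  b.out_no = a.ao_out_no );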
Related
I'm trying to build a data warehouse based on a star schema with 5 dimension tables and 1 fact table, using two sets of data: MASTERDATA, which holds 100 records, and DATASTREAM, which holds 10,000 records.
I am reading 100 records at a time from DATASTREAM into a cursor, then reading the cursor record by record and retrieving the relevant records from MASTERDATA via the index on product_id, as an index nested loop join. After this I load the new attributes from the transaction tuple into the relevant dimension and fact tables.
However, I have a few errors, and I'm looking for help understanding why I am getting them. The errors at the moment are:
Error(98,6): PL/SQL: SQL Statement Ignored
Error(101,5): PL/SQL: ORA-00933: SQL command not properly ended
Error(105,8): PLS-00103: Encountered the symbol "LOOP" when expecting one of the following: if
Error(113): PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: ;
My code:
CREATE OR REPLACE PROCEDURE transactionINLJ AS TYPE t_cursor is ref cursor;
v_cursor t_cursor;
v_cursor_records DATASTREAM%rowtype;
record_100 varchar2(300);
rec number;
v_customer_id masterdata.customer_id%type;
v_customer_account_type masterdata.customer_account_type%type;
v_product_id masterdata.product_id%type;
v_product_name masterdata.product_name%type;
v_supplier_id masterdata.supplier_id%type;
v_supplier_name masterdata.supplier_name%type;
v_outlet_id masterdata.outlet_id%type;
v_outlet_name masterdata.outlet_name%type;
v_sale_price masterdata.sale_price%type;
t_customer_id int;
t_supplier_id int;
t_product_id int;
t_outlet_id int;
t_date_id int;
t_sales_fact int;
BEGIN
rec := 1;
WHILE (rec <= 10000)
LOOP
record_100 := 'SELECT * FROM datastream WHERE datastream_id between '|| TO_CHAR(rec) ||
' and ' || TO_CHAR(rec+99);
OPEN v_cursor FOR record_100;
LOOP
FETCH v_cursor INTO v_cursor_records;
EXIT WHEN v_cursor%notfound;
SELECT product_id, product_name, supplier_id, supplier_name, sale_price
INTO v_product_id, v_product_name, v_supplier_id, v_supplier_name, v_sale_price
FROM masterdata
WHERE product_id = v_cursor_records.product_id;
SELECT COUNT(0)
INTO t_product_id
FROM product_dim
WHERE product_id = v_cursor_records.product_id;
IF t_product_id = 0 THEN
INSERT INTO product_dim(product_id, product_name)
VALUES (v_cursor_records.product_id, v_cursor_records.product_name);
END IF;
SELECT COUNT(0)
INTO t_customer_id
FROM customer_dim
WHERE customer_id = v_cursor_records.customer_id;
IF t_customer_id = 0 THEN
INSERT INTO customer_dim(customer_id, customer_name,customer_account_type)
VALUES (v_cursor_records.customer_name, v_cursor_records.customer_account_type, v_cursor_records.customer_account_type);
END IF;
SELECT COUNT(0)
INTO t_supplier_id
FROM supplier_dim
WHERE supplier_id = v_cursor_records.supplier_id;
IF t_supplier_id = 0 THEN
INSERT INTO supplier_dim(supplier_id, supplier_name)
VALUES (v_cursor_records.supplier_id, v_cursor_records.supplier_name);
END IF;
SELECT COUNT(0)
INTO t_outlet_id
FROM outlet_dim
WHERE outlet_id = v_cursor_records.outlet_id;
IF t_outlet_id = 0 THEN
INSERT INTO outlet_dim(outlet_id, outlet_name)
VALUES (v_cursor_records.outlet_id, v_cursor_records.outlet_name);
END IF;
SELECT COUNT(0)
INTO t_date_id
FROM date_dim
WHERE d_date = v_cursor_records.d_date;
IF t_date_id = 0 THEN
INSERT INTO date_dim(d_date, d_year, d_quater, d_month, d_day)
VALUES (v_cursor_records.d_date
,EXTRACT(year FROM v_cursor_records.d_date), TO_CHAR(v_cursor_records.d_date,'Q')
,EXTRACT(month FROM v_cursor_records.d_date)
,EXTRACT(day FROM v_cursor_records.d_date));
END IF;
SELECT COUNT(0)
INTO t_sales_fact
FROM sales_fact
WHERE product_id = v_cursor_records.product_id
AND customer_id = v_csr_rec.customer_id
AND supplier_id = v_csr_rec.supplier_id
AND outlet_id = v_csr_rec.outlet_id
AND d_date = v_csr_rec.d_date
AND sale_price = v_csr_rec.sale_price
AND quantity_sold = v_csr_rec.quantity_sold;
IF t_sales_fact = 0 THEN
INSERT INTO sales_fact(customer_id,product_id,outlet_id,supplier_id,d_date,sale_price,total_sale,quantity_sold)
VALUES (v_cursor_records.customer_id, v_cursor_records.product_id, v_cursor_records.outlet_id,v_cursor_records.supplier_id,
v_cursor_records.d_date, v_cursor_records.sale_price, v_cursor_records.quantity_sold*sale_price, v_cursor_records.quantity_sold)
END IF;
COMMIT;
END LOOP;
CLOSE v_cursor;
COMMIT;
rec := rec+100;
END LOOP;
END;
It is an unfortunate truth that occasionally procedural processing is required. But almost all of this can be done with just SQL and a tiny bit of PL/SQL. In particular, there is no need to "select count..." against any of your target tables; SQL handles that quite easily on the INSERT statement itself. Further, there is no need to loop through a cursor row by row (aka slow-by-slow); instead use BULK COLLECT and FORALL to handle the entire array (100 rows in this case) with a single INSERT for each table. With that there is no need for loop control counters, nor for calculating the ID numbers to retrieve, nor for knowing the exact number of rows (what would happen if your source table contained 10050 or 9950 rows instead of exactly 10000?). As a side effect you gain considerable performance. The following shows that process:
create or replace procedure transactioninlj as
k_bulk_buffer_size constant integer := 100;
cursor v_cursor is
select d.customer_id
, d.outlet_id
, d.outlet_name
, d.customer_name
, d.customer_account_type
, d.d_date
, d.quantity_sold
, m.product_id
, m.product_name
, m.supplier_id
, m.supplier_name
, m.sale_price
from datastream d
join masterdata m on m.product_id = d.product_id
;
type t_cursor_records is table of v_cursor%rowtype;
v_cursor_records t_cursor_records;
begin
open v_cursor;
loop
fetch v_cursor
bulk collect
into v_cursor_records
limit k_bulk_buffer_size;
forall v_index in 1 .. v_cursor_records.count
insert into product_dim(product_id, product_name)
select v_cursor_records(v_index).product_id
, v_cursor_records(v_index).product_name
from dual
where not exists
( select null
from product_dim
where product_id = v_cursor_records(v_index).product_id
);
forall v_index in 1 .. v_cursor_records.count
insert into supplier_dim(supplier_id, supplier_name)
select v_cursor_records(v_index).supplier_id
, v_cursor_records(v_index).supplier_name
from dual
where not exists
( select null
from supplier_dim
where supplier_id = v_cursor_records(v_index).supplier_id
);
forall v_index in 1 .. v_cursor_records.count
insert into customer_dim(customer_id, customer_name,customer_account_type)
select v_cursor_records(v_index).customer_id
, v_cursor_records(v_index).customer_name
, v_cursor_records(v_index).customer_account_type
from dual
where not exists
( select null
from customer_dim
where customer_id = v_cursor_records(v_index).customer_id
);
forall v_index in 1 .. v_cursor_records.count
insert into outlet_dim(outlet_id, outlet_name)
select v_cursor_records(v_index).outlet_id
, v_cursor_records(v_index).outlet_name
from dual
where not exists
( select null
from outlet_dim
where outlet_id = v_cursor_records(v_index).outlet_id
);
forall v_index in 1 .. v_cursor_records.count
insert into date_dim(d_date, d_year, d_quater, d_month, d_day)
select v_cursor_records(v_index).d_date
, extract(year from v_cursor_records(v_index).d_date)
, to_char(v_cursor_records(v_index).d_date,'Q')
, extract(month from v_cursor_records(v_index).d_date)
, extract(day from v_cursor_records(v_index).d_date)
from dual
where not exists
( select null
from date_dim
where d_date = v_cursor_records(v_index).d_date
);
forall v_index in 1 .. v_cursor_records.count
insert into sales_fact( customer_id
, product_id
, outlet_id
, supplier_id
, d_date
, sale_price
, total_sale
, quantity_sold
)
select v_cursor_records(v_index).customer_id
, v_cursor_records(v_index).product_id
, v_cursor_records(v_index).outlet_id
, v_cursor_records(v_index).supplier_id
, v_cursor_records(v_index).d_date
, v_cursor_records(v_index).sale_price
, v_cursor_records(v_index).quantity_sold
* v_cursor_records(v_index).sale_price
, v_cursor_records(v_index).quantity_sold
from dual
where not exists
( select null
from sales_fact
where product_id = v_cursor_records(v_index).product_id
and customer_id = v_cursor_records(v_index).customer_id
and supplier_id = v_cursor_records(v_index).supplier_id
and outlet_id = v_cursor_records(v_index).outlet_id
and d_date = v_cursor_records(v_index).d_date
and sale_price = v_cursor_records(v_index).sale_price
and quantity_sold = v_cursor_records(v_index).quantity_sold
);
exit when v_cursor_records.count < k_bulk_buffer_size;
end loop;
close v_cursor;
commit;
end transactioninlj;
Note: The DDL for the source tables is not included in your post, so I had to "invent" the definition for DATASTREAM. However, you only have 2 source inputs: DATASTREAM and MASTERDATA. Since you only select 5 columns from MASTERDATA, everything else must come from DATASTREAM.
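For illustration only, the "invented" DATASTREAM shape behind the cursor above would need roughly the columns below; the datatypes are guesses and your real table will differ:
create table datastream
( datastream_id         number
, customer_id           number
, customer_name         varchar2(100)
, customer_account_type varchar2(30)
, product_id            number
, outlet_id             number
, outlet_name           varchar2(100)
, d_date                date
, quantity_sold         number
);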
I need to write a procedure that will insert thousands of rows into a table, take the auto-generated id resulting from those rows, and use it in other inserts.
I used a FOR loop in which I save the sequence id in a variable and then use it in my inserts.
declare
first_id integer;
BEGIN
FOR texts in (select distinct text from table_texts )
LOOP
first_id := SEQ_IDS_OBJECTID.NEXTVAL;
INSERT INTO table_1(id,some_fields)
VALUES (first_id, 'blablabla');
insert into table_2 (id,text_field)
VALUES (first_id, texts.text);
END LOOP;
commit;
END;
I think this is not the ideal way to achieve what I need. Also, when I enter the code in TOAD, I get the following warning:
Rule 4809 (A loop that contains DML statements should be refactored to use BULK COLLECT and FORALL)
Is there a better way to do it?
EDIT:
The above code was simplified, but I think I have to expose more of it to explain the case:
declare
first_id integer;
second_id integer;
BEGIN
FOR texts in (select distinct text1 , text2 from mdf )
LOOP
first_id := XAKTA.SEQ_IDS_OBJECTID.NEXTVAL;
select id_1 into second_id from table_3 where field_1 =texts.text1 ;
INSERT INTO table_1(id_1,id_2,some_fields)
VALUES (first_id ,second_id ,'blablabla');
insert into table_2 (id,text1,text2)
VALUES (first_id, texts.text1,texts.text2);
END LOOP;
commit;
END;
You can use FORALL to insert batches of items from your cursor:
DECLARE
TYPE texts_tab IS TABLE OF table_texts.text%TYPE;
TYPE ids_tab IS TABLE OF table_2.id%TYPE;
p_texts texts_tab;
p_ids ids_tab;
CURSOR c IS
SELECT DISTINCT text FROM table_texts;
BEGIN
OPEN c;
LOOP
FETCH c BULK COLLECT INTO p_texts LIMIT 100;
FORALL i IN 1 .. p_texts.COUNT
INSERT INTO table_2 ( id, text_field )
VALUES ( SEQ_IDS_OBJECTID.NEXTVAL, p_texts(i) )
RETURNING id BULK COLLECT INTO p_ids;
FORALL i IN 1 .. p_ids.COUNT
INSERT INTO table_1( id, some_fields )
VALUES ( p_ids(i), 'blablabla' );
EXIT WHEN c%NOTFOUND;
END LOOP;
CLOSE c;
COMMIT;
END;
/
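If you also need the extra lookup from your edited version, it can usually be folded into the driving cursor and bulk-collected alongside the texts. A rough sketch, assuming table_3.field_1 matches mdf.text1 one-to-one as in your edit:
DECLARE
  TYPE text1_tab IS TABLE OF mdf.text1%TYPE;
  TYPE text2_tab IS TABLE OF mdf.text2%TYPE;
  TYPE id_tab    IS TABLE OF table_1.id_1%TYPE;
  p_text1      text1_tab;
  p_text2      text2_tab;
  p_second_ids id_tab;
  p_first_ids  id_tab;
  CURSOR c IS
    SELECT DISTINCT m.text1, m.text2, t3.id_1
    FROM mdf m
    JOIN table_3 t3 ON t3.field_1 = m.text1;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO p_text1, p_text2, p_second_ids LIMIT 100;
    EXIT WHEN p_text1.COUNT = 0;
    FORALL i IN 1 .. p_text1.COUNT
      INSERT INTO table_2 ( id, text1, text2 )
      VALUES ( XAKTA.SEQ_IDS_OBJECTID.NEXTVAL, p_text1(i), p_text2(i) )
      RETURNING id BULK COLLECT INTO p_first_ids;
    FORALL i IN 1 .. p_first_ids.COUNT
      INSERT INTO table_1 ( id_1, id_2, some_fields )
      VALUES ( p_first_ids(i), p_second_ids(i), 'blablabla' );
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/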
I have two tables, Table1 and Table2, that have identical columns. I need to check if a particular id is in one of them and return the row of data from whichever table contains it.
I have the following PL/SQL code:
v_result Table1%ROWTYPE;
BEGIN
SELECT a.*
INTO v_result
FROM Table1 a
WHERE a.id = 123;
EXCEPTION
WHEN NO_DATA_FOUND THEN -- when record not found
SELECT b.*
INTO v_result
FROM Table2 b
WHERE b.id = 123;
END;
The issue is that the exception does not get thrown, so v_result returns no data. How can I check v_result for the number of rows?
For a cursor I could use %ROWCOUNT, but v_result is not a cursor.
I also tried using the COUNT property, but it errored out.
I changed my code to:
v_result Table1%ROWTYPE;
BEGIN
SELECT a.*
INTO v_result
FROM Table1 a
WHERE a.id = 123;
if v_result.count =0 then
SELECT b.*
INTO v_result
FROM Table2 b
WHERE b.id = 123;
end if;
EXCEPTION
WHEN NO_DATA_FOUND THEN -- when record not found
SELECT b.*
INTO v_result
FROM Table2 b
WHERE b.id = 123;
END;
And got an error: component 'count' must be declared.
What am I doing wrong?
A record variable can hold only a single row. If you want to store and count multiple rows, you can define a collection of records and use BULK COLLECT INTO to load all of them at once; it won't raise NO_DATA_FOUND. The COUNT method works on collections.
DECLARE
TYPE type_tab1 IS TABLE OF Table1%ROWTYPE;
TYPE type_tab2 IS TABLE OF Table2%ROWTYPE;
v_result1 type_tab1;
v_result2 type_tab2;
BEGIN
SELECT a.*
BULK COLLECT INTO v_result1
FROM Table1 a
WHERE a.id = 123;
if v_result1.count = 0 then
SELECT b.* BULK COLLECT
INTO v_result2
FROM Table2 b
WHERE b.id = 123;
end if;
DBMS_OUTPUT.PUT_LINE('v_result1 ='|| v_result1.count);
DBMS_OUTPUT.PUT_LINE('v_result2 ='|| v_result2.count);
END;
/
Output for the Demo
v_result1 =0
v_result2 =1
If your intention is simply to check whether a row exists, then a simpler and more efficient approach would be to use EXISTS:
SELECT
CASE WHEN
EXISTS (
SELECT 1
FROM table1
WHERE id = 123
) THEN 1
ELSE 0
END
INTO v_count
FROM dual;
IF v_count = 0
THEN
..
..
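Putting the two together, a minimal sketch of the Table1/Table2 fallback using the EXISTS check (assuming both tables really do have identical columns, so Table1%ROWTYPE can hold a row from either):
DECLARE
  v_count  PLS_INTEGER;
  v_result Table1%ROWTYPE;
BEGIN
  SELECT CASE WHEN EXISTS ( SELECT 1 FROM Table1 WHERE id = 123 )
              THEN 1 ELSE 0 END
  INTO   v_count
  FROM   dual;

  IF v_count = 1 THEN
    SELECT a.* INTO v_result FROM Table1 a WHERE a.id = 123;
  ELSE
    -- may still raise NO_DATA_FOUND if the id is in neither table
    SELECT b.* INTO v_result FROM Table2 b WHERE b.id = 123;
  END IF;
END;
/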
Is it possible to do something like this in pl/sql for bulk insert using FORALL?
TYPE c_type1 IS RECORD
(
column1 table1.column1%TYPE,
column2 table1.column2%TYPE,
client table2.client%TYPE
);
type1 c_type1;
CURSOR cur_t IS select * BULK COLLECT INTO recs from table3 ;
begin
FOR recs IN cur_t
LOOP
SELECT * INTO type1 FROM (select a.column1, a.column2,imm.client
...
from table1 a, table2 imm
WHERE
a.column1 = recs.column1
) WHERE ROWNUM=1;
INSERT INTO table2 values (recs.column1,type1.column2);
...
P.S : There are more 80 columns to be inserted.
Your question is not entirely clear, but based on your code I came up with the following. Check whether this is what you were looking for.
declare
CURSOR cur_t IS
select t3.column1 , t1.column2
from table3 t3
inner join table1 t1
on t3.column1 = t1.column1;
type var_cur is table of cur_t%rowtype;
var var_cur;
begin
open cur_t;
LOOP
FETCH cur_t bulk collect into var limit 100;
EXIT WHEN var.count = 0; -- %NOTFOUND here would skip the last partial batch
FORALL i IN 1 .. var.count SAVE EXCEPTIONS
INSERT INTO TABLE2
VALUES var(i);
END LOOP;
CLOSE cur_t;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Error in Insertion of record' || '~~~~' || SQLERRM);
FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
LOOP
DBMS_OUTPUT.put_line (SQL%BULK_EXCEPTIONS (indx).ERROR_INDEX|| ': '
|| SQL%BULK_EXCEPTIONS (indx).ERROR_CODE);
END LOOP;
end;
Another PL/SQL refactoring question!
I have several cursors that are of the general simplified form:
cursor_1 is
with X as (select col1, col2 from TAB where col1 = '1'),
Y as (select col1, col2 from TAB where col2 = '3'),
/*main select*/
select count(X.col1), ...
from X inner join Y on...
group by rollup (X.col1, ...
cursor_2 is
with X as (select col1, col2 from TAB where col1 = '7' and col2 = '9' and col3 = 'TEST'),
Y as (select col1, col2 from TAB where col3 = '6'),
/*main select*/
select count(X.col1), ...
from X inner join Y on...
group by rollup (X.col1, ...
cursor_3 is
with X as (select col1, col2 from TAB where col1 IS NULL ),
Y as (select col1, col2 from TAB where col2 IS NOT NULL ),
/*main select*/
select count(X.col1), ...
from X inner join Y on...
group by rollup (X.col1, ...
...
begin
for r in cursor_1 loop
print_report_results(r);
end loop;
for r in cursor_2 loop
print_report_results(r);
end loop;
...
end;
Basically, all of these cursors (there's more than 3) are the same summary/reporting queries. The difference is in the factored subqueries. There are always 2 factored subqueries, "X" and "Y", and they always select the same columns to feed into the main reporting query.
The problem is that the main reporting query is VERY large, about 70 lines. This itself isn't so bad, but it was copy-pasted for ALL of the reporting queries (I think there are over a dozen).
Since the only difference is in the factored subqueries (they all return the same columns; it's really just a difference in the tables they select from and their conditions), I was hoping to find a way to refactor all this so that there is ONE query for the giant report and smaller ones for the various factored subqueries. That way, when changes are made to how the report is done, I only have to do it in one place, not a dozen. Not to mention a much easier-to-navigate (and read) file!
I just don't know how to properly refactor something like this. I was thinking pipelined functions? I'm not sure they're appropriate for this though, or if there's a simpler way...
On the other hand, I also wonder if performance would be significantly worse by splitting out the reporting query. Performance (speed) is an issue for this system. I'd rather not introduce changes for developer convenience if it adds significant execution time.
I guess what I'd ultimately like is something that looks sort of like this (I'm just not sure how to do this so that it will actually compile):
cursor main_report_cursor (in_X, in_Y) is
with X as (select * from in_X),
Y as (select * from in_Y)
/*main select*/
select count(X.col1), ...
from X inner join Y on...
group by rollup (X.col1, ...
cursor x_1 is
select col1, col2 from TAB where col1 = '1';
cursor y_1 is
select col1, col2 from TAB where col2 = '3'
...
begin
for r in main_report_cursor(x_1,y_1) loop
print_report_results(r);
end loop;
for r in main_report_cursor(x_2,y_2) loop
print_report_results(r);
end loop;
...
(Using Oracle 10g)
Use a pipelined function. For example:
drop table my_tab;
create table my_tab
(
col1 number,
col2 varchar2(10),
col3 char(1)
);
insert into my_tab values (1, 'One', 'X');
insert into my_tab values (1, 'One', 'Y');
insert into my_tab values (2, 'Two', 'X');
insert into my_tab values (2, 'Two', 'Y');
insert into my_tab values (3, 'Three', 'X');
insert into my_tab values (4, 'Four', 'Y');
commit;
-- define types
create or replace package refcur_pkg is
--type people_tab is table of people%rowtype;
type my_subquery_tab is table of my_tab%rowtype;
end refcur_pkg;
Create the function pipelined
-- create pipelined function
create or replace function get_tab_data(p_cur_num in number, p_cur_type in char)
return REFCUR_PKG.my_subquery_tab pipelined
IS
v_ret REFCUR_PKG.my_subquery_tab;
begin
if (p_cur_num = 1) then
if (upper(p_cur_type) = 'X') then
for rec in (select * from my_tab where col1=1 and col3='X')
loop
pipe row(rec);
end loop;
elsif (upper(p_cur_type) = 'Y') then
for rec in (select * from my_tab where col1=1 and col3='Y')
loop
pipe row(rec);
end loop;
else
return;
end if;
elsif (p_cur_num = 2) then
if (upper(p_cur_type) = 'X') then
for rec in (select * from my_tab where col1=2 and col3='X')
loop
pipe row(rec);
end loop;
elsif (upper(p_cur_type) = 'Y') then
for rec in (select * from my_tab where col1=2 and col3='Y')
loop
pipe row(rec);
end loop;
else
return;
end if;
end if;
return;
end;
MAIN procedure example
-- main procedure/usage
declare
cursor sel_cur1 is
with X as (select * from table(get_tab_data(1, 'x'))),
Y as (select * from table(get_tab_data(1, 'y')))
select X.col1, Y.col2 from X,Y where X.col1 = Y.col1;
begin
for rec in sel_cur1
loop
dbms_output.put_line(rec.col1 || ',' || rec.col2);
end loop;
end;
All of your various subqueries are reduced to a call to a single pipelined function, which determines the rows to return.
EDIT:
To combine all needed types and functions into 1 procedure, and also to use variables for subquery function parameters, I'm adding the following example:
create or replace procedure my_pipe
IS
-- define types
type my_subquery_tab is table of my_tab%rowtype;
type ref_cur_t is ref cursor;
v_ref_cur ref_cur_t;
-- define vars
v_with_sql varchar2(4000);
v_main_sql varchar2(32767);
v_x1 number;
v_x2 char;
v_y1 number;
v_y2 char;
v_col1 my_tab.col1%type;
v_col2 my_tab.col2%type;
-- define local functions/procs
function get_tab_data(p_cur_num in number, p_cur_type in char)
return my_subquery_tab pipelined
IS
v_ret my_subquery_tab;
begin
if (p_cur_num = 1) then
if (upper(p_cur_type) = 'X') then
for rec in (select * from my_tab where col1=1 and col3='X')
loop
pipe row(rec);
end loop;
elsif (upper(p_cur_type) = 'Y') then
for rec in (select * from my_tab where col1=1 and col3='Y')
loop
pipe row(rec);
end loop;
else
return;
end if;
elsif (p_cur_num = 2) then
if (upper(p_cur_type) = 'X') then
for rec in (select * from my_tab where col1=2 and col3='X')
loop
pipe row(rec);
end loop;
elsif (upper(p_cur_type) = 'Y') then
for rec in (select * from my_tab where col1=2 and col3='Y')
loop
pipe row(rec);
end loop;
else
return;
end if;
end if;
return;
end;
BEGIN
---------------------------------
-- Setup SQL for cursors
---------------------------------
-- this will have different parameter values for subqueries
v_with_sql := q'{
with X as (select * from table(get_tab_data(:x1, :x2))),
Y as (select * from table(get_tab_data(:y1, :y2)))
}';
-- this will stay the same for all cursors
v_main_sql := q'{
select X.col1, Y.col2 from X,Y where X.col1 = Y.col1
}';
---------------------------------
-- set initial subquery parameters
---------------------------------
v_x1 := 1;
v_x2 := 'x';
v_y1 := 1;
v_y2 := 'y';
open v_ref_cur for v_with_sql || v_main_sql using v_x1, v_x2, v_y1, v_y2;
loop
fetch v_ref_cur into v_col1, v_col2;
exit when v_ref_cur%notfound;
dbms_output.put_line(v_col1 || ',' || v_col2);
end loop;
close v_ref_cur;
---------------------------------
-- change subquery parameters
---------------------------------
v_x1 := 2;
v_x2 := 'x';
v_y1 := 2;
v_y2 := 'y';
open v_ref_cur for v_with_sql || v_main_sql using v_x1, v_x2, v_y1, v_y2;
loop
fetch v_ref_cur into v_col1, v_col2;
exit when v_ref_cur%notfound;
dbms_output.put_line(v_col1 || ',' || v_col2);
end loop;
close v_ref_cur;
end;
Note that the benefit now is that even if you have many different cursors, you only need to define the main query and the subquery SQL once. After that, you're just changing variables.
Cheers
--Create views that will be replaced by common table expressions later.
--The column names have to be the same, the actual content doesn't matter.
create or replace view x as select 'wrong' col1, 'wrong' col2 from dual;
create or replace view y as select 'wrong' col1, 'wrong' col2 from dual;
--Put the repetitive logic in one view
create or replace view main_select as
select count(x.col1) total, x.col2
from X inner join Y on x.col1 = y.col1
group by rollup (x.col1);
--Just querying the view produces the wrong results
select * from main_select;
--But when you add the common table expressions X and Y they override
--the dummy views and produce the real results.
declare
cursor cursor_1 is
with X as (select 'right' col1, 'right' col2 from dual),
Y as (select 'right' col1, 'right' col2 from dual)
select total, col2 from main_select;
--... repeat for each cursor, just replace X and Y as necessary
begin
for r in cursor_1 loop
dbms_output.put_line(r.col2);
end loop;
null;
end;
/
This solution is a little weirder than the pipelined approach, and requires 3 new objects for the views, but it will probably run faster
since there is less context switching between SQL and PL/SQL.
One possibility you could consider is using 2 Global Temporary Tables (GTTs) for X and Y. Then you just need one cursor, but you have to clear and re-populate the 2 GTTs several times - and if data volumes are large you may want to get optimiser stats on the GTTs each time too.
This is the sort of thing I mean:
cursor_gtt is
select count(X.col1), ...
from GTT_X X inner join GTT_Y Y on...
group by rollup (X.col1, ...
begin
insert into gtt_x select col1, col2 from TAB where col1 = '1';
insert into gtt_y select col1, col2 from TAB where col2 = '3';
-- maybe get stats for gtt_x and gtt_y here
for r in cursor_gtt loop
print_report_results(r);
end loop;
delete gtt_x;
delete gtt_y;
insert into gtt_x select col1, col2 from TAB where col1 = '7' and col2 = '9' and col3 = 'TEST';
insert into gtt_y select col1, col2 from TAB where col3 = '6';
-- maybe get stats for gtt_x and gtt_y here
for r in cursor_gtt loop
print_report_results(r);
end loop;
...
end;
So the same 2 GTTs are re-populated and the same cursor is used each time.
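For reference, the two GTTs are created once as schema objects, not inside the procedure. A sketch, assuming col1 and col2 share TAB's datatypes, plus the optional stats-gathering step:
-- use the real datatypes of TAB.col1 / TAB.col2 here
create global temporary table gtt_x
( col1 varchar2(10)
, col2 varchar2(10)
) on commit preserve rows;

create global temporary table gtt_y
( col1 varchar2(10)
, col2 varchar2(10)
) on commit preserve rows;

-- the "maybe get stats" step inside the procedure:
dbms_stats.gather_table_stats(user, 'GTT_X');
dbms_stats.gather_table_stats(user, 'GTT_Y');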
What about creating a view for the main query? That pretties up your code and centralizes the main query to boot.