Using EXECUTE IMMEDIATE based on entries of table - oracle

I (using Oracle 12c, PL/SQL) need to update an existing table TABLE1 based on information stored in a table MAP. In a simplified version, MAP looks like this:
COLUMN_NAME   MODIFY
COLUMN1       N
COLUMN2       Y
COLUMN3       N
...           ...
COLUMNn       Y
COLUMN1 to COLUMNn are column names in TABLE1 (but there are more columns, not just these). Now I need to update a column in TABLE1 if MODIFY in table MAP contains a 'Y' for that column's name. There are other row conditions, so what I would need would be UPDATE statements of the form
UPDATE TABLE1
SET COLUMNi = value_i
WHERE OTHER_COLUMN = 'xyz_i';
where COLUMNi runs through all the columns of TABLE1 which are marked with MODIFY = 'Y' in MAP. value_i and xyz_i also depend on information stored in MAP (not displayed in the example).
The table MAP is not static but changes, so I do not know in advance which columns to update. What I did so far is to generate the UPDATE-statements I need in a query from MAP, i.e.
SELECT <Text of UPDATE-STATEMENT using row information from MAP> AS SQL_STMT
FROM MAP
WHERE MODIFY = 'Y';
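For concreteness, such a generator might look like this (purely a sketch; the MAP columns VALUE and WHERE_COND are hypothetical stand-ins for the extra information mentioned above):
SELECT 'UPDATE TABLE1 SET ' || COLUMN_NAME || ' = ''' || VALUE ||
       ''' WHERE OTHER_COLUMN = ''' || WHERE_COND || '''' AS SQL_STMT
FROM MAP
WHERE MODIFY = 'Y';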
Now I would like to execute these statements (possibly hundreds of rows). Of course I could just copy the contents of the query into code and execute, but is there a way to do this automatically, e.g. using EXECUTE IMMEDIATE? It could be something like
BEGIN
EXECUTE IMMEDIATE SQL_STMT USING 'xyz_i';
END;
only that SQL_STMT should run through all the rows of the previous query (and 'xyz_i' varies with the row as well). Any hints on how to achieve this, or how one should approach the task in general?
EDIT: In response to the comments, here is a bit more background on how this problem arises. I receive an empty n x m matrix (empty except for row and column names; think of them as the first row and first column) quarterly and need to populate the empty fields from another process.
The structure of the initial matrix changes, i.e. there may be new/deleted columns/rows, and existing columns/rows may change their position in the matrix. What I need to do is take the old version of the matrix, where I have already filled the empty spaces, and translate it into the new version. Then the populating process merely checks whether entries have changed and, if so, alters them.
The situation from the question arises after I have translated the old version into the new one, before computing the delta. The new matrix, populated with the old information, is TABLE1. The delta process, over which I have no control, gives me column names and information to be entered into the cells of the matrix (this is table MAP). So I need to find the column in the matrix labeled by the delta process and then change values in rows (which rows is specified via other information provided by the delta process).

Dynamic SQL it is; here's an example, see if it helps.
This is a table whose contents should be modified:
SQL> select * from test order by id;
        ID NAME           SALARY
---------- ---------- ----------
         1 Little            100
         2                   200
         3 Foot                0
         4                     0
This is the map table:
SQL> select * from map;
COLUMN_NAME CB_MODIFY VALUE WHERE_CLAUSE
----------- --------- ----- -------------
NAME        Y         Scott where id <= 3
SALARY      N         1000  where 1 = 1
The following anonymous block loops through all columns that are set to be modified, composes the dynamic UPDATE statement and executes it:
declare
  l_str varchar2(1000);
begin
  for cur_r in (select m.column_name, m.value, m.where_clause
                from map m
                where m.cb_modify = 'Y'
               )
  loop
    l_str := 'update test set ' ||
             cur_r.column_name || ' = ' || chr(39) || cur_r.value || chr(39) || ' ' ||
             cur_r.where_clause;
    execute immediate l_str;
  end loop;
end;
/
PL/SQL procedure successfully completed.
Result:
SQL> select * from test order by id;
        ID NAME           SALARY
---------- ---------- ----------
         1 Scott             100
         2 Scott             200
         3 Scott               0
         4                     0
SQL>
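One refinement worth considering (a sketch on the same tables): concatenating the value into the statement breaks if a value contains a single quote. The column name and WHERE clause have to stay concatenated (identifiers and whole clauses cannot be bound), but the value itself can be passed as a bind variable:
declare
  l_str varchar2(1000);
begin
  for cur_r in (select m.column_name, m.value, m.where_clause
                from map m
                where m.cb_modify = 'Y')
  loop
    -- bind the value; only the identifier and WHERE clause are concatenated
    l_str := 'update test set ' || cur_r.column_name || ' = :val ' ||
             cur_r.where_clause;
    execute immediate l_str using cur_r.value;
  end loop;
end;
/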

Related

ORACLE apex - looping through checkbox items using PL/SQL

I have a checkbox on my page P3_Checkbox1 that is populated from the database. How can I loop through all the checked values of my checkbox in PL/SQL code?
I assume that APEX_APPLICATION.G_F01 is used only by SQL-generated checkboxes and I cannot use it for a regular checkbox, as it would not populate the G_Fxx array.
I think I understand now what you're saying.
For example, suppose that the P3_CHECKBOX1 checkbox item allows 3 values (display/return):
Absent: -1
Unknown: 0
Here: 1
I created a button (which will just SUBMIT the page) and two text items which will display checked values: P3_CHECKED_VALUES and P3_CHECKED_VALUES_2.
Then I created a process which fires when the button is pressed. The process looks like this (read the comments as well, please):
begin
-- This is trivial; checked (selected) values are separated by a colon sign,
-- just like in a Shuttle item
:P3_CHECKED_VALUES := :P3_CHECKBOX1;
-- This is what you might be looking for; it uses the APEX_STRING.SPLIT function
-- which splits selected values (the result is ROWS, not a column), and these
-- values can then be used in a join or anywhere else, as if it was result of a
-- subquery. The TABLE function is used as well.
-- LISTAGG is used just to return a single value so that I wouldn't have to worry
-- about TOO-MANY-ROWS error.
with
description (code, descr) as
(select -1, 'Absent' from dual union all
select 0 , 'Unknown' from dual union all
select 1 , 'Here' from dual),
desc_join as
(select d.descr
from description d join (select * from table(apex_string.split(:P3_CHECKED_VALUES, ':'))) s
on d.code = s.column_value
)
select listagg(j.descr, ' / ') within group (order by null)
into :P3_CHECKED_VALUES_2
from desc_join j;
end;
Suppose that the Absent and Unknown values have been checked. The result of that PL/SQL process is:
P3_CHECKED_VALUES = -1:0
P3_CHECKED_VALUES_2 = Absent / Unknown
You can rewrite it as you want; this is just one example.
Keywords:
APEX_STRING.SPLIT
TABLE function
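For reference, here is what those two keywords do in isolation (a minimal sketch):
-- APEX_STRING.SPLIT turns a delimited string into rows,
-- and the TABLE function lets you query those rows like a table
select column_value
from table(apex_string.split('-1:0', ':'));

COLUMN_VALUE
------------
-1
0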
I hope it'll help.
[EDIT: loop through selected values]
Looping through those values isn't difficult; you'd do it as follows (see the cursor FOR loop):
declare
l_dummy number;
begin
for cur_r in (select * From table(apex_string.split(:P3_CHECKED_VALUES, ':')))
loop
select max(1)
into l_dummy
from some_table where some_column = cur_r.column_value;
if l_dummy is null then
-- checked value does not exist
insert into some_table (some_column, ...) values (cur_r.column_value, ...);
else
-- checked value exists
delete from some_table where ...
end if;
end loop;
end;
However, I'm not sure what you meant by saying that a "particular combination is in the database". Does it mean that you are storing colon-separated values into a column in that table? If so, the above code won't work either because you'd compare, for example
-1 to 0:-1 and then
0 to 0:-1
which won't be true (except in the simplest cases, when checked and stored values have only one value).
Although Apex's "multiple choices" look nice, they can become a nightmare when you have to actually do something with them (like in your case).
Maybe you should first sort checkbox values, sort database values, and then compare those two strings.
It means that LISTAGG might once again become handy, such as
listagg(j.descr, ' / ') within group (order by j.descr)
Database values could be sorted this way:
with test (col) as
  (select '1:0' from dual union all
   select '1:-1:0' from dual
  ),
inter as
  (select col,
          regexp_substr(col, '[^:]+', 1, column_value) token
   from test,
        table(cast(multiset(select level from dual
                            connect by level <= regexp_count(col, ':') + 1
                           ) as sys.odcinumberlist))
  )
select
  col source_value,
  listagg(token, ':') within group (order by token) sorted_value
from inter
group by col;
SOURCE SORTED_VALUE
------ --------------------
1:-1:0 -1:0:1
1:0    0:1
SQL>
Once you have them both sorted, you can compare them and either INSERT or DELETE a row.
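The checkbox side can be normalized the same way; a minimal sketch (assuming a VARCHAR2 variable l_sorted_checked, inside the same process):
select listagg(column_value, ':') within group (order by column_value)
into l_sorted_checked
from table(apex_string.split(:P3_CHECKED_VALUES, ':'));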

Update or Insert on a Table based on a column value

I have two tables BASE and DAILY as shown below:
BASE
Cust ID  IP address
1        10.5.5.5
2        10.5.5.50
3        10.5.5.6
DAILY
Cust ID  IP address
1        10.5.5.5
2        10.5.5.70
4        10.5.5.67
The table DAILY is refreshed every 24 hours. Now, for every Cust ID in BASE, I have to check whether the IP address has been modified in DAILY; if yes, then update the row in BASE.
All the new entries in DAILY have to be inserted into BASE.
I have tried this using a cursor to compare and update, and then another cursor for the insertion, but it is taking a lot of time.
What is the best possible way to do this?
You could also use MERGE depending on your database system.
SQL Server syntax would be
MERGE INTO BASE B
USING DAILY D
ON D.CustId = B.CustId
WHEN NOT MATCHED THEN
INSERT (CustId, Ip) VALUES (D.CustId, D.Ip)
WHEN MATCHED AND D.Ip <> B.Ip THEN
UPDATE SET B.Ip = D.Ip;
Oracle PL/SQL syntax seems to be much the same, take a look here
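For reference, the Oracle form would look roughly like this (a sketch, using the CUSTID/IP_ADD column names from the example further below; Oracle puts the extra condition in a WHERE clause on the UPDATE branch):
MERGE INTO base b
USING daily d
ON (d.custid = b.custid)
WHEN MATCHED THEN
  UPDATE SET b.ip_add = d.ip_add
  WHERE d.ip_add <> b.ip_add
WHEN NOT MATCHED THEN
  INSERT (custid, ip_add)
  VALUES (d.custid, d.ip_add);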
If you just want to update all the rows in your BASE table, use an UPDATE:
UPDATE `BASE`
SET `IP address` = (SELECT `IP address`
FROM DAILY
WHERE DAILY.`Cust ID` = `BASE`.`Cust ID`);
Then, use this INSERT INTO query to insert the new values that do not exist in your BASE table.
INSERT INTO `BASE`
SELECT `Cust ID`, `IP address`
FROM DAILY
WHERE DAILY.`Cust ID` NOT IN (SELECT `Cust ID` FROM BASE);
declare
begin
  for i in (select * from daily where ip_add not in (select ip_add from base))
  loop
    update base set ip_add = i.ip_add where custid = i.custid;
  end loop;
end;
/
PL/SQL procedure successfully completed.
SQL> select * from base;
    CUSTID IP_ADD
---------- ----------
         1 10.5.5.5
         2 10.5.5.20 -- updated value from base where ip_add is different
         3 10.5.5.6
SQL> select * from base;
    CUSTID IP_ADD
---------- ----------
         1 10.5.5.5
         2 10.5.5.20
         4 10.5.5.62
SQL>

Create PL/SQL script with 2 cursors, a parameter and give results from a table?

I need to create a script that pulls a key number from table A (which will be used as a parameter later), then flows that parameter or key number into a query, and then dumps those results into a holding record or table for later manipulation. Because each fetch returns more than one row (in reality there are 6 rows per query result, or per claim key), I decided to use the BULK COLLECT clause. Though my initial test on a different database worked, I have not yet figured out why the real script is not working.
Here is the test script that I used:
DECLARE
--Cursors--
CURSOR prod_id is select distinct(product_id) from product order by 1 asc;
CURSOR cursorValue(p_product_id NUMBER) IS
SELECT h.product_description,o.company_short_name
FROM company o,product h
WHERE o.product_id =h.product_id
AND h.product_id =p_product_id
AND h.product_id IS NOT NULL
ORDER by 2;
--Table to store Cursor data--
TYPE indx IS TABLE OF cursorValue%ROWTYPE
INDEX BY PLS_INTEGER;
indx_tab indx;
---Variable objects---
TotalIDs PLS_INTEGER;
TotalRows PLS_INTEGER := 0 ;
BEGIN
--PARAMETER CURSOR RUNS---
FOR prod_id2 in prod_id LOOP
dbms_output.put_line('Product ID: ' || prod_id2.product_id);
TotalIDs := prod_id%ROWCOUNT;
--FLOW PARAMETER TO SECOND CURSOR--
Open cursorValue(prod_id2.product_id);
Loop
Fetch cursorValue Bulk collect into indx_tab;
---data dump into table---
--dbms_output.put_line('PROD Description: ' || indx_tab.product_description|| ' ' ||'Company Name'|| indx_tab.company_short_name);
TotalRows := TotalRows + cursorValue%ROWCOUNT;
EXIT WHEN cursorValue%NOTFOUND;
End Loop;
CLOSE cursorValue;
End Loop;
dbms_output.put_line('Product ID Total: ' || TotalIDs);
dbms_output.put_line('Description Rows: ' || TotalRows);
END;
Test Script Results:
anonymous block completed
Product ID: 1
Product ID: 2
Product ID: 3
Product ID: 4
Product ID: 5
Product ID Total: 5
Description Rows: 6
Update: Marking question as "answered" Thanks.
The first error is on line 7. On line 4 you have:
CURSOR CUR_CLAIMNUM IS
SELECT DISTINCT(CLAIM_NO)FROM R7_OPENCLAIMS;
... and that seems to be valid, so your column name is CLAIM_NO. On line 7:
CURSOR OPEN_CLAIMS (CLAIM_NUM R7_OPENCLAIMS.CLAIM_NUM%TYPE) IS
... so you've mistyped the column name as CLAIM_NUM, which doesn't exist in that table. Which is what the error message is telling you, really.
The other errors are because the cursor is invalid, because of that typo.
When you open the second cursor you have the same name confusion:
OPEN OPEN_CLAIMS (CUR_CLAIMNUM2.CLAIM_NUM);
... which fails because the cursor is querying CLAIM_NO, not CLAIM_NUM; except here it's further confused by the DISTINCT. You haven't aliased the column name, so Oracle applies one, which you could refer to, but it's simpler to add your own:
CURSOR CUR_CLAIMNUM IS
SELECT DISTINCT(CLAIM_NO) AS CLAIM_NO FROM R7_OPENCLAIMS;
and then
OPEN OPEN_CLAIMS (CUR_CLAIMNUM2.CLAIM_NO);
But I'd suggest you also change the cursor name from CUR_CLAIMNUM to CUR_CLAIM_NO, both in the definition and the loop declaration. And having the cursor iterator called CUR_CLAIMNUM2 is odd as it suggests that is itself a cursor name; maybe something like ROW_CLAIM_NO would be clearer.
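Putting the suggested fixes together, the corrected skeleton would look roughly like this (a sketch; only the names quoted above come from the original, everything else is assumed):
DECLARE
  CURSOR cur_claim_no IS
    SELECT DISTINCT(CLAIM_NO) AS CLAIM_NO FROM R7_OPENCLAIMS;
  -- the parameter type now references the real column name
  CURSOR open_claims (p_claim_no R7_OPENCLAIMS.CLAIM_NO%TYPE) IS
    SELECT * FROM R7_OPENCLAIMS WHERE CLAIM_NO = p_claim_no;
BEGIN
  FOR row_claim_no IN cur_claim_no LOOP
    OPEN open_claims(row_claim_no.CLAIM_NO);
    -- ... fetch and process as before ...
    CLOSE open_claims;
  END LOOP;
END;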

Fastest way of doing field comparisons in the same table with large amounts of data in oracle

I am receiving information from a csv file from one department to compare with the same information in a different department, to check for discrepancies (about 3/4 of a million rows of data with 44 columns in each row). After I have the data in a table, I have a program that will take the data and send reports based on an HQ. I feel like the way I am going about this is not the most efficient. I am using Oracle for this comparison.
Here is what I have:
I have a vb.net program that parses the data and inserts it into an extract table
I run a procedure to do a full outer join on the two tables into a new table, with the fields from one department suffixed with '_c'
I run another procedure to compare the old/new data and update 2 different tables with detail and summary information. Here is code from inside the procedure:
DECLARE
CURSOR Cur_Comp IS SELECT * FROM T.AEC_CIS_COMP;
BEGIN
FOR compRow in Cur_Comp LOOP
--If service pipe exists in CIS but not in FM and the service pipe has status of retired in CIS, ignore the variance
If(compRow.pipe_num = '' AND cis_status_c = 'R')
continue
END IF
--If there is not a summary record for this HQ in the table for this run, create one
INSERT INTO t.AEC_CIS_SUM (HQ, RUN_DATE)
SELECT compRow.HQ, to_date(sysdate, 'DD/MM/YYYY') from dual WHERE NOT EXISTS
(SELECT null FROM t.AEC_CIS_SUM WHERE HQ = compRow.HQ AND RUN_DATE = to_date(sysdate, 'DD/MM/YYYY'))
-- Check fields and update the tables accordingly
If (compRow.cis_loop <> compRow.cis_loop_c) Then
--Insert information into the details table
INSERT INTO T.AEC_CIS_DET( Fac_id, Pipe_Num, Hq, Address, AutoUpdatedFl,
DateTime, Changed_Field, CIS_Value, FM_Value)
VALUES(compRow.Fac_ID, compRow.Pipe_Num, compRow.Hq, compRow.Street_Num || ' ' || compRow.Street_Name,
'Y', sysdate, 'Cis_Loop', compRow.cis_loop, compRow.cis_loop_c);
-- Update information into the summary table
UPDATE AEC_CIS_SUM
SET cis_loop = cis_loop + 1
WHERE Hq = compRow.Hq
AND Run_Date = to_date(sysdate, 'DD/MM/YYYY')
End If;
END LOOP;
END;
Any suggestions for an easier way of doing this, rather than an IF statement for each of the 44 columns of the table? (This is run once a week, if it matters.)
Update: Just to clarify, there are 88 columns of data (44 pairs to compare, one of each pair suffixed with _c). One table lists each differing field in its own row, so one comparison can mean 30+ records written to that table. The other table keeps a tally of the number of discrepancies for each week.
First of all, I believe that your task can be implemented (and actually should be) with straight SQL. No fancy cursors, no loops, just selects, inserts and updates. I would start with unpivoting your source data (it is not clear whether you have a primary key to join the two sets; I guess you do):
Col0_PK  Col1 Col2 Col3 Col4
----------------------------
Row1_val A    B    C    D
Row2_val E    F    G    H
Above is your source data. Using the UNPIVOT clause, we convert it to:
Col0_PK  Col_Name Col_Value
---------------------------
Row1_val Col1     A
Row1_val Col2     B
Row1_val Col3     C
Row1_val Col4     D
Row2_val Col1     E
Row2_val Col2     F
Row2_val Col3     G
Row2_val Col4     H
I think you get the idea. Say we have table1 with one set of data and the same-structured table2 with the second set of data. It is a good idea to use index-organized tables here.
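The UNPIVOT step itself might look like this (a sketch, assuming the column names above and that the data columns share one datatype; INCLUDE NULLS keeps rows whose value is empty):
select col0_pk, col_name, col_value
from table1
unpivot include nulls
  (col_value for col_name in (col1, col2, col3, col4));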
The next step is to compare the rows to each other and store the difference details. Something like:
insert into diff_details(some_service_info_columns_here)
select some_service_info_columns_here_along_with_data_difference
from table1 t1 inner join table2 t2
on t1.Col0_PK = t2.Col0_PK
and t1.Col_name = t2.Col_name
and nvl(t1.Col_value, 'Dummy1') <> nvl(t2.Col_value, 'Dummy2');
And in the last step we update the difference summary table:
insert into diff_summary(summary_columns_here)
select diff_row_id, count(*) as diff_count
from diff_details
group by diff_row_id;
It's just a rough draft to show my approach; I'm sure there are many more details that should be taken into account. To summarize, I suggest two things:
UNPIVOT data
Use SQL statements instead of cursors
You have several issues in your code:
If(compRow.pipe_num = '' AND cis_status_c = 'R')
continue
END IF
"cis_status_c" is not declared. Is it a variable or a column in AEC_CIS_COMP?
In case it is a column, just put the condition into the cursor, i.e. SELECT * FROM T.AEC_CIS_COMP WHERE NOT (pipe_num = '' AND cis_status_c = 'R'). (Bear in mind that in Oracle '' is NULL, so pipe_num = '' is never true; pipe_num IS NULL is probably what's meant.)
to_date(sysdate, 'DD/MM/YYYY')
That's nonsense: you convert a date into a date via an implicit character conversion. Simply use TRUNC(SYSDATE).
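To illustrate (a minimal sketch): the implicit DATE-to-VARCHAR2-to-DATE round trip depends on the session's NLS date format and can silently mangle the result (with the default DD-MON-RR format, a two-digit year like '25' parsed with YYYY becomes year 0025), while TRUNC does no conversion at all:
-- TRUNC(SYSDATE) keeps the date and sets the time to 00:00:00
select trunc(sysdate) from dual;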
Anyway, I think you can use three single statements instead of a cursor:
INSERT INTO t.AEC_CIS_SUM (HQ, RUN_DATE)
SELECT comp.HQ, trunc(sysdate)
from AEC_CIS_COMP comp
WHERE NOT EXISTS
(SELECT null FROM t.AEC_CIS_SUM WHERE HQ = comp.HQ AND RUN_DATE = trunc(sysdate));
INSERT INTO T.AEC_CIS_DET( Fac_id, Pipe_Num, Hq, Address, AutoUpdatedFl, DateTime, Changed_Field, CIS_Value, FM_Value)
select comp.Fac_ID, comp.Pipe_Num, comp.Hq, comp.Street_Num || ' ' || comp.Street_Name, 'Y', sysdate, 'Cis_Loop', comp.cis_loop, comp.cis_loop_c
from T.AEC_CIS_COMP comp
where comp.cis_loop <> comp.cis_loop_c;
UPDATE AEC_CIS_SUM
SET cis_loop = cis_loop + 1
WHERE Hq IN (Select Hq from T.AEC_CIS_COMP)
AND trunc(Run_Date) = trunc(sysdate);
They are not tested but they should give you a hint how to do it.

ways to avoid global temp tables in oracle

We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1 ...); these tables got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs.
Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues.
What other alternatives are there? Collections? Cursors?
Our typical use of GTTs is like so:
Insert into GTT
INSERT INTO some_gtt_1
(column_a,
column_b,
column_c)
(SELECT someA,
someB,
someC
FROM TABLE_A
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y'));
Update the GTT
UPDATE some_gtt_1 a
SET column_a = (SELECT b.data_a FROM some_table_b b
WHERE a.column_b = b.data_b AND a.column_c = 'C')
WHERE column_a IS NULL OR column_a = ' ';
and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries.
I have a three-part question:
1. Can someone show how to transform the above sample queries to collections and/or cursors?
2. Since with GTTs you can work natively with SQL... why go away from the GTTs? Are they really that bad?
3. What should be the guidelines on when to use and when to avoid GTTs?
Let's answer the second question first:
"why go away from the GTTs? are they
really that bad."
A couple of days ago I was knocking up a proof of concept which loaded a largish XML file (~18MB) into an XMLType. Because I didn't want to store the XMLType permanently I tried loading it into a PL/SQL variable (session memory) and a temporary table. Loading it into a temporary table took five times as long as loading it into an XMLType variable (5 seconds compared to 1 second). The difference is because temporary tables are not memory structures: they are written to disk (specifically your nominated temporary tablespace).
If you want to cache a lot of data then storing it in memory will stress the PGA, which is not good if you have lots of sessions. So it's a trade-off between RAM and time.
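For context, a GTT is declared once (it is a permanent object in the dictionary) and each session sees only its own rows; a minimal sketch, reusing a table name from the question with made-up columns:
create global temporary table some_gtt_1
( column_a varchar2(30)
, column_b varchar2(30)
, column_c varchar2(30)
) on commit preserve rows;  -- or ON COMMIT DELETE ROWS to purge at commit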
To the first question:
"Can someone show how to transform the
above sample queries to collections
and/or cursors?"
The queries you post can be merged into a single statement:
SELECT case when a.column_a IS NULL OR a.column_a = ' '
            then b.data_a
            else a.column_a end AS someA,
a.someB,
a.someC
FROM TABLE_A a
left outer join TABLE_B b
on ( a.column_b = b.data_b AND a.column_c = 'C' )
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y');
(I have simply transposed your logic but that case() statement could be replaced with a neater nvl2(trim(a.column_a), a.column_a, b.data_a) ).
I know you say your queries are more complicated but your first port of call should be to consider rewriting them. I know how seductive it is to break a gnarly query into lots of baby SQLs stitched together with PL/SQL but pure SQL is way more efficient.
To use a collection it is best to define the types in SQL, because it gives us the flexibility to use them in SQL statements as well as PL/SQL.
create or replace type tab_a_row as object
(col_a number
, col_b varchar2(23)
, col_c date);
/
create or replace type tab_a_nt as table of tab_a_row;
/
Here's a sample function, which returns a result set:
create or replace function get_table_a
(p_arg in number)
return sys_refcursor
is
tab_a_recs tab_a_nt;
rv sys_refcursor;
begin
select tab_a_row(col_a, col_b, col_c)
bulk collect into tab_a_recs
from table_a
where col_a = p_arg;
for i in tab_a_recs.first()..tab_a_recs.last()
loop
if tab_a_recs(i).col_b is null
then
tab_a_recs(i).col_b := 'something';
end if;
end loop;
open rv for select * from table(tab_a_recs);
return rv;
end;
/
And here it is in action:
SQL> select * from table_a
2 /
     COL_A COL_B                   COL_C
---------- ----------------------- ---------
         1 whatever                13-JUN-10
         1                         12-JUN-10
SQL> var rc refcursor
SQL> exec :rc := get_table_a(1)
PL/SQL procedure successfully completed.
SQL> print rc
     COL_A COL_B                   COL_C
---------- ----------------------- ---------
         1 whatever                13-JUN-10
         1 something               12-JUN-10
SQL>
In the function it is necessary to instantiate the type with the columns, in order to avoid the ORA-00947 exception. This is not necessary when populating a PL/SQL table type:
create or replace procedure pop_table_a
  (p_arg in number)
is
  type table_a_nt is table of table_a%rowtype;
  tab_a_recs table_a_nt;
begin
  select *
  bulk collect into tab_a_recs
  from table_a
  where col_a = p_arg;
end;
/
Procedure created.
SQL>
Finally, guidelines
"What should be the guidelines on When
to use and When to avoid GTT's"
Global temp tables are very good when we need share cached data between different program units in the same session. For instance if we have a generic report structure generated by a single function feeding off a GTT which is populated by one of several procedures. (Although even that could also be implemented with dynamic ref cursors ...)
Global temporary tables are also good if we have a lot of intermediate processing which is just too complicated to be solved with a single SQL query. Especially if that processing must be applied to subsets of the retrieved rows.
But in general the presumption should be that we don't need to use a temporary table. So
Do it in SQL unless it is too hard, in which case ...
... do it in PL/SQL variables (usually collections) unless it takes too much memory, in which case ...
... do it with a Global Temporary Table.
Generally I'd use a PL/SQL collection for storing small volumes of data (maybe a thousand rows). If the data volumes were much larger, I'd use a GTT so that they don't overload the process memory.
So I might select a few hundred rows from the database into a PL/SQL collection, then loop through them to do some calculation/delete a few or whatever, then insert that collection into another table.
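A sketch of that pattern (the target table table_a_archive and the UPPER tweak are made up):
declare
  type t_rows is table of table_a%rowtype;
  l_rows t_rows;
begin
  -- pull a few hundred rows into session memory
  select * bulk collect into l_rows
  from table_a
  where rownum <= 500;
  -- adjust the rows in the collection
  for i in 1 .. l_rows.count loop
    l_rows(i).col_b := upper(l_rows(i).col_b);
  end loop;
  -- insert the whole collection into another table in one statement
  forall i in 1 .. l_rows.count
    insert into table_a_archive values l_rows(i);
end;
/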
If I was dealing with hundreds of thousands of rows, I would try to push as much of the 'heavy lifting' processing into large SQL statements. That may or may not require GTT.
You can use SQL-level collection objects as something that translates quite easily between SQL and PL/SQL:
create type typ_car is object (make varchar2(10), model varchar2(20), year number(4));
/
create type typ_coll_car is table of typ_car;
/
select * from table (typ_coll_car(typ_car('a','b',1999), typ_car('A','Z',2000)));
MAKE       MODEL                           YEAR
---------- -------------------- ---------------
a          b                           1,999.00
A          Z                           2,000.00
declare
v_car1 typ_car := typ_car('a','b',1999);
v_car2 typ_car := typ_car('A','Z',2000);
t_car typ_coll_car := typ_coll_car();
begin
t_car := typ_coll_car(v_car1, v_car2);
FOR i in (SELECT * from table(t_car)) LOOP
dbms_output.put_line(i.year);
END LOOP;
end;
/
