I have a program that has been working for years. Today, we upgraded from SAS 9.4M3 to 9.4M7.
proc setinit
Current version: 9.04.01M7P080520
Since then, I am not able to get the same results as before the upgrade.
Please note that I am querying on Oracle databases directly.
Trying to replicate the issue with a minimal, reproducible SAS table example, I found that the issue disappears when querying a SAS table instead of the Oracle databases.
Let's say I have the following dataset:
data have;
infile datalines delimiter="|" truncover;
input name :$8. id :$1. value :$8.;
datalines;
Joe|A|TLO
Joe|B|IKSK
Joe|C|Yes
;
run;
Using the temporary table:
proc sql;
create table want as
select name,
min(case when id = "A" then value else "" end) as A length=8
from have
group by name;
quit;
Results:
name A
Joe TLO
However, when running the very same query on the Oracle database directly, I get a missing value instead:
proc sql;
create table want as
select name,
min(case when id = "A" then value else "" end) as A length=8
from have_oracle
group by name;
quit;
name A
Joe
As per the documentation, the min() function behaves properly when used on the SAS table:
The MIN function returns a missing value (.) only if all arguments are missing.
I believe this happens when Oracle doesn't understand the function that SAS is passing it - the MIN functions in SAS and Oracle are very different, and the Oracle equivalent of SAS's MIN() would be LEAST().
So my guess is that the upgrade changed how SAS translates its MIN function to Oracle, but it remains a guess. Has anyone run into this type of behavior?
EDIT: Per @Richard's comment, here is the SASTRACE output:
options sastrace=',,,d' sastraceloc=saslog nostsuffix;
proc sql;
create table want as
select t1.name,
min(case when id = 'A' then value else "" end) as A length=8
from oracle_db.names t1 inner join oracle_db.ids t2 on (t1.tid = t2.tid)
group by t1.name;
ORACLE_26: Prepared: on connection 0
SELECT * FROM NAMES
ORACLE_27: Prepared: on connection 1
SELECT UI.INDEX_NAME, UIC.COLUMN_NAME FROM USER_INDEXES UI,USER_IND_COLUMNS UIC WHERE UI.TABLE_NAME='NAMES' AND
UIC.TABLE_NAME='NAMES' AND UI.INDEX_NAME=UIC.INDEX_NAME
ORACLE_28: Executed: on connection 1
SELECT statement ORACLE_27
ORACLE_29: Prepared: on connection 0
SELECT * FROM IDS
ORACLE_30: Prepared: on connection 1
SELECT UI.INDEX_NAME, UIC.COLUMN_NAME FROM USER_INDEXES UI,USER_IND_COLUMNS UIC WHERE UI.TABLE_NAME='IDS' AND
UIC.TABLE_NAME='IDS' AND UI.INDEX_NAME=UIC.INDEX_NAME
ORACLE_31: Executed: on connection 1
SELECT statement ORACLE_30
ORACLE_32: Prepared: on connection 0
select t1."NAME", MIN(case when t2."ID" = 'A' then t1."VALUE" else ' ' end) as A from
NAMES t1 inner join IDS t2 on t1."TID" = t2."TID" group by t1."NAME"
ORACLE_33: Executed: on connection 0
SELECT statement ORACLE_32
ACCESS ENGINE: SQL statement was passed to the DBMS for fetching data.
NOTE: Table WORK.SELECTED_ATTR created, with 1 row and 2 columns.
! quit;
NOTE: PROCEDURE SQL used (Total process time):
real time 0.34 seconds
cpu time 0.09 seconds
Use the SASTRACE= system option to log SQL statements sent to the DBMS.
options SASTRACE=',,,d';
will provide the most detailed logging.
From the prepared statement you can see why you are getting a blank from the Oracle query.
select
t1."NAME"
, MIN ( case
when t2."ID" = 'A' then t1."VALUE"
else ' '
end
) as A
from
NAMES t1 inner join IDS t2 on t1."TID" = t2."TID"
group by
t1."NAME"
The SQL MIN() aggregate function excludes null values from consideration.
In SAS SQL, a blank value is also interpreted as null, so in SAS your query returns the minimum non-null value, TLO.
In the transformed Oracle query, the SAS blank '' becomes ' ', a single blank character, which is not null; thus ' ' < 'TLO' and you get the blank result.
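You can see Oracle's side of this directly (a minimal sketch you can run against DUAL):
-- '' is NULL in Oracle, so MIN() skips it and returns TLO
select min(v) from (
  select 'TLO' v from dual union all
  select ''    v from dual
);

-- a single blank ' ' is a real (non-null) value that sorts
-- before 'TLO', so it becomes the MIN
select min(v) from (
  select 'TLO' v from dual union all
  select ' '   v from dual
);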
The actual MIN you want to force in Oracle is min(case when id = "A" then value else null end), which @Tom has shown is possible by omitting the ELSE clause.
The only way to see the actual difference is to run the query with trace in the prior SAS version, or if lucky, see the explanation in the (ignored by many) "What's New" documents.
Why are you using ' ' or '' as the ELSE value? Perhaps Oracle is treating a string with blanks in it differently than a null string.
Why not use null in the ELSE clause?
Or just leave off the ELSE clause and let it default to null?
libname mylib oracle .... ;
proc sql;
create table want as
select name
, min(case when id = "A" then value else null end) as A length=8
from mylib.have_oracle
group by name
;
quit;
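A variant with the ELSE clause left off entirely, per the last suggestion above (CASE defaults to null when no ELSE is given):
proc sql;
create table want as
select name
, min(case when id = "A" then value end) as A length=8
from mylib.have_oracle
group by name
;
quit;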
Also try running the Oracle code yourself, instead of using implicit pass-through.
proc sql;
connect to oracle ..... ;
create table want as
select * from connection to oracle
(
select name,
       min(case when id = 'A' then value else null end) as A
from have_oracle
group by name
)
;
quit;
When I try to reproduce this in Oracle I get the result you are looking for so I suspect it has something to do with SAS (which I'm not familiar with).
with t as (
select 'Joe' name, 'A' id, 'TLO' value from dual union all
select 'Joe' name, 'B' id, 'IKSK' value from dual union all
select 'Joe' name, 'C' id, 'Yes' value from dual
)
select name
, min(case when id = 'A' then value else '' end) as a
from t
group by name;
NAME A
---- ----
Joe TLO
Unrelated, if you are only interested in id = 'A' then a better query would be:
select name
, min(value) as a
from t
where id = 'A'
group by name;
I have created a pipelined function which returns a table. I use this function like a dynamic view in another function, in a WITH clause, to mark certain records. I then use the results from this query in an aggregate query, based on various criteria. What I want to do is UNION ALL these aggregations together (as they all use the same source data, but show aggregations at different hierarchical levels).
When I produce the data for individual levels, it works fine. However, when I try to combine them, I get an ORA-12840 error: cannot access a remote table after parallel/insert direct load txn.
(I should note that my function and queries are looking at tables on a remote server, via a DB link).
Any ideas what's going on here?
Here's an idea of the code:
function getMatches(criteria in varchar2) return myTableType pipelined;
...where this function basically executes some dynamic SQL, which references remote tables, as a reference cursor and spits out the results.
Then the factored queries go something like:
with marked as (
select id from table(getMatches('OK'))
),
fullStats as (
select mainTable.id,
avg(nvl2(marked.id, 1, 0)) isMarked,
sum(mainTable.val) total
from mainTable
left join marked
on marked.id = mainTable.id
group by mainTable.id
)
The reason for the first factor is speed -- if I inline it, in the join, the query goes really slowly -- but either way, it doesn't alter the status of whatever's causing the exception.
Then, say for a complete overview, I would do:
select sum(total) grandTotal
from fullStats
...or for an overview by isMarked:
select sum(total) grandTotal
from fullStats
where isMarked = 1
These work fine individually (my pseudocode maybe wrong or overly simplistic, but you get the idea), but as soon as I union all them together, I get the ORA-12840 error :(
EDIT By request, here is an obfuscated version of my function:
function getMatches(
search in varchar2)
return idTable pipelined
as
idRegex varchar2(20) := '(05|10|20|32)\d{3}';
searchSQL varchar2(32767);
type rc is ref cursor;
cCluster rc;
rCluster idTrinity;
BAD_CLUSTER exception;
begin
if regexp_like(search, '^L\d{3}$') then
searchSQL := 'select distinct null id1, id2_link id2, id3_link id3 from anotherSchema.linkTable#my.remote.link where id2 = ''' || search || '''';
elsif regexp_like(search, '^' || idRegex || '(,' || idRegex || ')*$') then
searchSQL := 'select distinct null id1, id2, id3 from anotherSchema.idTable#my.remote.link where id2 in (' || regexp_replace(search, '(\d{5})', '''\1''') || ')';
else
raise BAD_CLUSTER;
end if;
open cCluster for searchSQL;
loop
fetch cCluster into rCluster;
exit when cCluster%NOTFOUND;
pipe row(rCluster);
end loop;
close cCluster;
return;
exception
when BAD_CLUSTER then
raise_application_error(-20000, 'Invalid Cluster Search');
return;
when others then
raise_application_error(-20999, 'API' || sqlcode || chr(10) || sqlerrm);
return;
end getMatches;
It's very simple, designed for an API with limited access to the database, in terms of sophistication (hence passing a comma-delimited string as a possible valid argument): if you supply a grouping code, it returns linked IDs (it's a composite, 3-field key); however, if you supply a custom list of codes, it just returns those instead.
I'm on Oracle 10gR2; not sure which version exactly, but I can look it up when I'm back in the office :P
To be honest, I have no idea where the issue comes from, but the simplest way to solve it is to create a temporary table, populate it with the values from your pipelined function, and use that table inside the WITH clause. The temp table does have to be created up front, but I'm pretty sure you'll also see a serious performance improvement, because dynamic sampling isn't applied to pipelined functions without tricks.
P.S. The issue could also be fixed by with marked as (select /*+ INLINE */ id from table(getMatches('OK'))), but surely that isn't what you're looking for; it does confirm my suspicion, though, that WITH does something like an insert /*+ APPEND */ behind the scenes.
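A sketch of that workaround, with hypothetical names and column sizes, shaped after the idTrinity rows the function pipes out:
-- one-time DDL: a GTT matching the pipelined row type
create global temporary table gtt_matches (
  id1 varchar2(20),
  id2 varchar2(20),
  id3 varchar2(20)
) on commit delete rows;

-- at runtime: stage the function output, then point the CTE at it
insert into gtt_matches (id1, id2, id3)
  select id1, id2, id3 from table(getMatches('OK'));

with marked as (
  select id2 as id from gtt_matches  -- was: table(getMatches('OK'))
)
select count(id) from marked;        -- fullStats etc. continue as before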
There are plenty of stored procedures on our 10g platform (almost 500 SPs).
Each SP might have loops, fetches, etc.
I'd like to ask if there is a good method to check all the SPs which currently run on 10g and guarantee that they will work on 11g.
I have one development server on 10g and another development server on 11g.
I can use both of them to test the proposition above.
For instance, I know that on 10g, if you loop over a cursor, update statements issued during the loop do not affect the data already fetched by the loop, but on 11g they do.
There might be more cases that I have to consider. Please tell me if you have any brilliant ideas; otherwise I will check them one by one manually, which takes a lot of time, and manual checking can be unreliable.
Important note: it is said that if you select some data from a table or tables and use it in a loop, then updating and committing during the loop affects the data selected in the cursor (on 11g). But this did not happen on 10g. Please correct me if you have heard anything like that.
The example case:
CREATE TABLE vty_musteri(
musterino NUMBER NOT NULL,
subeadi VARCHAR2(61),
kayitzamani VARCHAR2(20)
);
INSERT INTO vty_musteri (musterino, subeadi, kayitzamani )
VALUES (12345, 'AMSTERDAM', '05/30/2012 15:11:13');
COMMIT;
CREATE UNIQUE INDEX vty_musteri_idx ON vty_musteri (musterino);
SELECT * FROM vty_musteri;
CREATE OR REPLACE PROCEDURE krd_upd_silseomusteri_sp(RC1 in out SYS_REFCURSOR) AS
v_musterino NUMBER := 12345;
BEGIN
OPEN RC1 FOR
SELECT m.musterino, m.subeadi, m.kayitzamani
FROM vty_musteri m
WHERE m.musterino = v_musterino;
update vty_musteri
set subeadi = 'PORTO',
kayitzamani = (SELECT TO_CHAR(SYSDATE, 'MM/DD/YYYY HH24:MI:SS')
FROM dual)
where musterino = v_musterino;
COMMIT;
END;
After all that, run this test block in PL/SQL:
DECLARE
--test
vRecTip SYS_REFCURSOR;
TYPE vRecTipK IS RECORD(
musterino NUMBER,
subeadi VARCHAR2(61),
kayitzamani VARCHAR2(20)
);
v_SeoTip vRecTipK;
BEGIN
krd_upd_silseomusteri_sp(rc1 => vRecTip);
IF vRecTip%ISOPEN THEN
LOOP
FETCH vRecTip
INTO v_SeoTip;
EXIT WHEN vRecTip%NOTFOUND;
dbms_output.put_line('The Value : ' || v_SeoTip.musterino || ' - ' || v_SeoTip.subeadi || ' - ' || v_SeoTip.kayitzamani);
END LOOP;
END IF;
COMMIT;
END;
If you run this on 10g you will see AMSTERDAM, but on 11g it is PORTO.
To fix it, I put a hint in the SP like the following:
SELECT /*+ full(m)*/ m.musterino, m.subeadi, m.kayitzamani
Isn't it weird? Any alternative suggestions to get AMSTERDAM?
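One alternative way to get AMSTERDAM is to materialize the rows before the update, so the ref cursor reads from a collection instead of the table. A sketch (the type names are invented for illustration):
create or replace type vty_musteri_row as object (
  musterino   number,
  subeadi     varchar2(61),
  kayitzamani varchar2(20)
);
/
create or replace type vty_musteri_tab as table of vty_musteri_row;
/
create or replace procedure krd_upd_silseomusteri_sp(RC1 in out SYS_REFCURSOR) as
  v_musterino number := 12345;
  t_rows      vty_musteri_tab;
begin
  -- fetch the pre-update values into a collection first
  select vty_musteri_row(m.musterino, m.subeadi, m.kayitzamani)
    bulk collect into t_rows
    from vty_musteri m
   where m.musterino = v_musterino;
  -- the cursor now reads the materialized rows, not the table
  open RC1 for select * from table(t_rows);
  update vty_musteri
     set subeadi = 'PORTO',
         kayitzamani = to_char(sysdate, 'MM/DD/YYYY HH24:MI:SS')
   where musterino = v_musterino;
  commit;
end;
/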
One thing we stumbled upon during a migration was that queries which weren't supposed to work on 10.x (but did anyway) no longer worked on 11.x.
This happens if you have ambiguous column references in your query.
Something like this:
SELECT name,
f.some_col,
b.other_col
FROM foo f
JOIN bar b ON f.id = b.fid
If the column name exists in both tables, 10.x would run the statement - which was a bug.
This bug (BugID: 6760937) was fixed and makes the statement (rightfully) fail in 11.x
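The fix is simply to qualify the ambiguous column:
SELECT f.name,   -- or b.name, whichever was intended
       f.some_col,
       b.other_col
FROM foo f
JOIN bar b ON f.id = b.fid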
Basic PLSQL structures should work exactly the same. Some pitfalls are listed here:
http://www.help2ora.com/index.php/2011/08/04/be-careful-when-migrating-difference-between-oracle-10g-and-11g/
Recently I migrated to Oracle 11g and faced a few unexpected issues. I have written a blog post on this; have a look: http://learncodewrite.blogspot.in/2017/04/migrating-to-oracle-11g-from-oracle-10g.html?m=1
I am developing an application in Oracle APEX. I have a comma-delimited string of user IDs which looks like this:
45,4932,20,19
This string is stored as
:P5_USER_ID_LIST
I want a query that will find all users that are within this list. My query looks like this:
SELECT * FROM users u WHERE u.user_id IN (:P5_USER_ID_LIST);
I keep getting an Oracle error: invalid number. However, if I hard-code the string into the query, it works, like this:
SELECT * FROM users u WHERE u.user_id IN (45,4932,20,19);
Anyone know why this might be an issue?
A bind variable binds a value, in this case the string '45,4932,20,19'. You could use dynamic SQL and concatenation as suggested by Randy, but you would need to be very careful that the user is not able to modify this value, otherwise you have a SQL Injection issue.
A safer route would be to put the IDs into an Apex collection in a PL/SQL process:
declare
array apex_application_global.vc_arr2;
begin
array := apex_util.string_to_table (:P5_USER_ID_LIST, ',');
apex_collection.create_or_truncate_collection ('P5_ID_COLL');
apex_collection.add_members ('P5_ID_COLL', array);
end;
Then change your query to:
SELECT * FROM users u WHERE u.user_id IN
(SELECT c001 FROM apex_collections
WHERE collection_name = 'P5_ID_COLL')
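On more recent APEX versions (5.1 and later), apex_string.split does the same job with less ceremony, if it's available to you:
SELECT * FROM users u
WHERE u.user_id IN (
  SELECT to_number(trim(column_value))
  FROM table(apex_string.split(:P5_USER_ID_LIST, ','))
);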
An easier solution is to use instr:
SELECT * FROM users u
WHERE instr(',' || :P5_USER_ID_LIST || ',', ',' || u.user_id || ',', 1) != 0;
Tricks:
',' || :P5_USER_ID_LIST || ','
makes your string ,45,4932,20,19,
',' || u.user_id || ','
makes the search term e.g. ,32, so you avoid matching the 32 inside ,4932,
I have faced this situation several times and here is what I've used:
SELECT *
FROM users u
WHERE ','||to_char(:P5_USER_ID_LIST)||',' like '%,'||to_char(u.user_id)||',%'
I've used the LIKE operator, but you must be a little careful of one aspect here: your item P5_USER_ID_LIST must be ",45,4932,20,19," so that LIKE compares against an exact number, ',45,'.
When used like this, the select will not mistake, let's say, 5 for 15, 155, or 55.
Try it out and let me know how it goes;)
Cheers, Alex
Create a native query rather than using "createQuery/createNamedQuery"
The reason this is an issue is that you cannot just bind an in list the way you want, and just about everyone makes this mistake at least once as they are learning Oracle (and probably SQL!).
When you bind the string '32,64,128', it effectively becomes a query like:
select ...
from t
where t.c1 in ('32,64,128')
To Oracle this is totally different to:
select ...
from t
where t.c1 in (32,64,128)
The first example has a single string value in the in-list and the second has 3 numbers in the in-list. The reason you get an invalid number error is that Oracle attempts to cast the string '32,64,128' into a number, which it cannot do due to the commas in the string.
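You can reproduce the error in isolation (a minimal sketch, reusing the hypothetical table t above):
-- the bound string arrives as ONE element, not three:
select * from t where t.c1 in ('32,64,128');
-- comparing the numeric column c1 with that string makes Oracle
-- attempt TO_NUMBER('32,64,128'), which raises
-- ORA-01722: invalid number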
A variation of this "how do I bind an in list" question has come up on here quite a few times recently.
Generically, and without resorting to any PL/SQL, worrying about SQL injection, or failing to bind the query correctly, you can use this trick:
with bound_inlist
as
(
select
substr(txt,
instr (txt, ',', 1, level ) + 1,
instr (txt, ',', 1, level+1) - instr (txt, ',', 1, level) -1 )
as token
from (select ','||:txt||',' txt from dual)
connect by level <= length(:txt)-length(replace(:txt,',',''))+1
)
select *
from bound_inlist a, users u
where to_number(a.token) = u.user_id;
If possible, the best idea may be to not store your user IDs in CSV! Put them in a table, or failing that an array, etc. You cannot bind a CSV field as a number.
Please don't use WHERE ','||to_char(:P5_USER_ID_LIST)||',' like '%,'||to_char(u.user_id)||',%' because it forces a full table scan. With the users table you may not have that many rows, so the impact will be low, but against other tables in an enterprise environment this is a problem.
EDIT: I have put together a script to demonstrate the differences between the regex method and the wildcard like method. Not only is regex faster but it's also a lot more robust.
-- Create table
create table CSV_TEST
(
NUM NUMBER not null,
STR VARCHAR2(20)
);
create sequence csv_test_seq;
begin
for j in 1..10 loop
for i in 1..500000 loop
insert into csv_test( num, str ) values ( csv_test_seq.nextval, to_char( csv_test_seq.nextval ));
end loop;
commit;
end loop;
end;
/
-- Create/Recreate primary, unique and foreign key constraints
alter table CSV_TEST
add constraint CSV_TEST_PK primary key (NUM)
using index ;
alter table CSV_TEST
add constraint CSV_TEST_UK unique (STR)
using index;
select sysdate from dual;
select *
from csv_test t
where t.num in ( Select Regexp_Substr('100001, 100002, 100003 , 100004, 100005','[^,]+', 1, Level) From Dual
Connect By Regexp_Substr('100001, 100002,100003, 100004, 100005', '[^,]+', 1, Level) Is Not Null);
select sysdate from dual;
select *
from csv_test t
where ('%,' || '100001,100002, 100003, 100004 ,100005' || ',%') like '%,' || num || ',%';
select sysdate from dual;
select *
from csv_test t
where t.num in ( Select Regexp_Substr('100001, 100002, 100003 , 100004, 100005','[^,]+', 1, Level) From Dual
Connect By Regexp_Substr('100001, 100002,100003, 100004, 100005', '[^,]+', 1, Level) Is Not Null);
select sysdate from dual;
select *
from csv_test t
where ('%,' || '100001,100002, 100003, 100004 ,100005' || ',%') like '%,' || num || ',%';
select sysdate from dual;
drop table csv_test;
drop sequence csv_test_seq;
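A side note if you rerun this script: the select sysdate from dual bookends only resolve to the second; in SQL*Plus, set timing on reports sub-second elapsed times per statement instead:
set timing on
-- each subsequent statement now prints its own "Elapsed:" line,
-- so the sysdate bookend queries can be dropped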
Solution from Tony Andrews works for me. The process should be added to "Page processing" >> "After submit" >> "Processes".
Since you are storing the user IDs as a single string, you can match against the string using LIKE, as below:
SELECT * FROM users u WHERE ',' || :P5_USER_ID_LIST || ',' LIKE '%,' || u.user_id || ',%'
For example, with
:P5_USER_ID_LIST = 45,4932,20,19
the query will return every user whose ID appears in the list. Enjoy!
You will need to run this as dynamic SQL: create the entire string, then run it dynamically.
We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1 ...); these got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs.
Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues.
What other alternatives are there? Collections? Cursors?
Our typical use of GTTs is like so:
Insert into GTT
INSERT INTO some_gtt_1
(column_a,
column_b,
column_c)
(SELECT someA,
someB,
someC
FROM TABLE_A
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y'));
Update the GTT
UPDATE some_gtt_1 a
SET column_a = (SELECT b.data_a FROM some_table_b b
WHERE a.column_b = b.data_b AND a.column_c = 'C')
WHERE column_a IS NULL OR column_a = ' ';
and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries.
I have a three-part question:
1. Can someone show how to transform the above sample queries to collections and/or cursors?
2. Since with GTTs you can work natively with SQL, why go away from GTTs? Are they really that bad?
3. What should be the guidelines on when to use and when to avoid GTTs?
Let's answer the second question first:
"why go away from the GTTs? are they
really that bad."
A couple of days ago I was knocking up a proof of concept which loaded a largish XML file (~18MB) into an XMLType. Because I didn't want to store the XMLType permanently I tried loading it into a PL/SQL variable (session memory) and a temporary table. Loading it into a temporary table took five times as long as loading it into an XMLType variable (5 seconds compared to 1 second). The difference is because temporary tables are not memory structures: they are written to disk (specifically your nominated temporary tablespace).
If you want to cache a lot of data then storing it in memory will stress the PGA, which is not good if you have lots of sessions. So it's a trade-off between RAM and time.
To the first question:
"Can someone show how to transform the
above sample queries to collections
and/or cursors?"
The queries you post can be merged into a single statement:
SELECT case when a.column_a IS NULL OR a.column_a = ' '
            then b.data_a
            else a.column_a end AS someA,
a.someB,
a.someC
FROM TABLE_A a
left outer join TABLE_B b
on ( a.column_b = b.data_b AND a.column_c = 'C' )
WHERE condition_1 = 'YN756'
AND type_cd = 'P'
AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
AND (lname LIKE (v_LnameUpper || '%') OR
lname LIKE (v_searchLnameLower || '%'))
AND (e_flag = 'Y' OR
it_flag = 'Y' OR
fit_flag = 'Y');
(I have simply transposed your logic, but that case() statement could be replaced with a neater nvl2(trim(a.column_a), a.column_a, b.data_a).)
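Spelled out, that replacement looks like this (identical logic to the CASE expression):
SELECT nvl2(trim(a.column_a), a.column_a, b.data_a) AS someA,
       a.someB,
       a.someC
FROM TABLE_A a
left outer join TABLE_B b
  on ( a.column_b = b.data_b AND a.column_c = 'C' )
WHERE condition_1 = 'YN756';  -- remaining predicates as above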
I know you say your queries are more complicated but your first port of call should be to consider rewriting them. I know how seductive it is to break a gnarly query into lots of baby SQLs stitched together with PL/SQL but pure SQL is way more efficient.
To use a collection it is best to define the types in SQL, because it gives us the flexibility to use them in SQL statements as well as PL/SQL.
create or replace type tab_a_row as object
(col_a number
, col_b varchar2(23)
, col_c date);
/
create or replace type tab_a_nt as table of tab_a_row;
/
Here's a sample function, which returns a result set:
create or replace function get_table_a
(p_arg in number)
return sys_refcursor
is
tab_a_recs tab_a_nt;
rv sys_refcursor;
begin
select tab_a_row(col_a, col_b, col_c)
bulk collect into tab_a_recs
from table_a
where col_a = p_arg;
for i in tab_a_recs.first()..tab_a_recs.last()
loop
if tab_a_recs(i).col_b is null
then
tab_a_recs(i).col_b := 'something';
end if;
end loop;
open rv for select * from table(tab_a_recs);
return rv;
end;
/
And here it is in action:
SQL> select * from table_a
2 /
COL_A COL_B COL_C
---------- ----------------------- ---------
1 whatever 13-JUN-10
1 12-JUN-10
SQL> var rc refcursor
SQL> exec :rc := get_table_a(1)
PL/SQL procedure successfully completed.
SQL> print rc
COL_A COL_B COL_C
---------- ----------------------- ---------
1 whatever 13-JUN-10
1 something 12-JUN-10
SQL>
In the function it is necessary to instantiate the type with the columns, in order to avoid the ORA-00947 exception. This is not necessary when populating a PL/SQL table type:
SQL> create or replace procedure pop_table_a
2 (p_arg in number)
3 is
4 type table_a_nt is table of table_a%rowtype;
5 tab_a_recs table_a_nt;
6 begin
7 select *
8 bulk collect into tab_a_recs
9 from table_a
10 where col_a = p_arg;
11 end;
12 /
Procedure created.
SQL>
Finally, guidelines
"What should be the guidelines on When
to use and When to avoid GTT's"
Global temp tables are very good when we need to share cached data between different program units in the same session. For instance if we have a generic report structure generated by a single function feeding off a GTT which is populated by one of several procedures. (Although even that could also be implemented with dynamic ref cursors ...)
Global temporary tables are also good if we have a lot of intermediate processing which is just too complicated to be solved with a single SQL query. Especially if that processing must be applied to subsets of the retrieved rows.
But in general the presumption should be that we don't need to use a temporary table. So
Do it in SQL unless it is too hard, in which case ...
... Do it in PL/SQL variables (usually collections) unless it takes too much memory, in which case ...
... Do it with a Global Temporary Table
Generally I'd use a PL/SQL collection for storing small volumes of data (maybe a thousand rows). If the data volumes were much larger, I'd use a GTT so that they don't overload the process memory.
So I might select a few hundred rows from the database into a PL/SQL collection, then loop through them to do some calculation/delete a few or whatever, then insert that collection into another table.
If I was dealing with hundreds of thousands of rows, I would try to push as much of the 'heavy lifting' processing into large SQL statements. That may or may not require GTT.
You can use SQL-level collection objects as something that translates quite easily between SQL and PL/SQL:
create type typ_car is object (make varchar2(10), model varchar2(20), year number(4));
/
create type typ_coll_car is table of typ_car;
/
select * from table (typ_coll_car(typ_car('a','b',1999), typ_car('A','Z',2000)));
MAKE MODEL YEAR
---------- -------------------- ---------------
a b 1,999.00
A Z 2,000.00
declare
v_car1 typ_car := typ_car('a','b',1999);
v_car2 typ_car := typ_car('A','Z',2000);
t_car typ_coll_car := typ_coll_car();
begin
t_car := typ_coll_car(v_car1, v_car2);
FOR i in (SELECT * from table(t_car)) LOOP
dbms_output.put_line(i.year);
END LOOP;
end;
/