Oracle: how to address a column in a PL/SQL FOR LOOP by its position instead of by column name

Is there a way to address a column by its position in PL/SQL, something like this:
BEGIN
FOR r IN (SELECT * FROM table)
LOOP
DBMS_OUTPUT.PUT_LINE(r.(1));
END LOOP;
END;
I tried both bracket types but always get
PLS-00103: Encountered the symbol "(" when expecting one of the following
I also tried BULK COLLECT INTO with a table of varray, but then I get
PLS-00642: local collection types not allowed in SQL statements
And if I declare a table of %ROWTYPE and bulk collect into it, it seems to behave the same way as the normal loop.
I looked in the documentation but couldn't find any example of this scenario.

You can use the DBMS_SQL package and the COLUMN_VALUE procedure, but that's a lot of work... probably overkill for your actual needs: you have to open a SYS_REFCURSOR for the query, convert it into a cursor number with DBMS_SQL.TO_CURSOR_NUMBER, collect the descriptions of the columns with DBMS_SQL.DESCRIBE_COLUMNS, loop over the column count, and fetch each value with DBMS_SQL.COLUMN_VALUE into a variable of the appropriate type...
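For reference, here is a minimal sketch of that shape (my own illustration, not tested against your data; it assumes a hypothetical YOUR_TABLE and defines every column as VARCHAR2 for simplicity, which is fine for character, number and date columns):
DECLARE
  l_rc     SYS_REFCURSOR;
  l_cur    INTEGER;
  l_cols   DBMS_SQL.DESC_TAB;
  l_colcnt INTEGER;
  l_value  VARCHAR2(4000);
BEGIN
  OPEN l_rc FOR 'SELECT * FROM your_table';   -- hypothetical table
  l_cur := DBMS_SQL.TO_CURSOR_NUMBER(l_rc);
  DBMS_SQL.DESCRIBE_COLUMNS(l_cur, l_colcnt, l_cols);
  -- define every column as VARCHAR2 so one variable can receive any value
  FOR j IN 1 .. l_colcnt LOOP
    DBMS_SQL.DEFINE_COLUMN(l_cur, j, l_value, 4000);
  END LOOP;
  WHILE DBMS_SQL.FETCH_ROWS(l_cur) > 0 LOOP
    FOR j IN 1 .. l_colcnt LOOP
      DBMS_SQL.COLUMN_VALUE(l_cur, j, l_value);   -- column addressed by position j
      DBMS_OUTPUT.PUT_LINE('col ' || j || ' (' || l_cols(j).col_name || '): ' || l_value);
    END LOOP;
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(l_cur);
END;
/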

In general, no. Oracle does not support accessing records by position; only by identifier.
You can approach the problem slightly differently: instead of using SELECT *, explicitly list the columns to select and give them numeric column aliases:
BEGIN
FOR r IN (SELECT dummy AS "1" FROM DUAL)
LOOP
DBMS_OUTPUT.PUT_LINE(r."1");
END LOOP;
END;
/
You are still referring to the columns by name; only now their names are the "numeric" aliases.
Note: personally, I would stick to using the column names and not try to find workarounds to use positions, as positions can change if columns are dropped from the table.

If you want to have a matrix in PL/SQL you could define one as an indexed table of an indexed table of whatever data type you're dealing with, something like:
type t_row is table of varchar2(1) index by pls_integer;
type t_tab is table of t_row index by pls_integer;
l_matrix t_tab;
The tricky part is populating it. DBMS_SQL is certainly one approach, but partly for fun, you could use XML. The DBMS_XMLGEN package lets you run a query and get the results back as an XML document with a ROWSET root node. You can then count the ROW nodes under that, and assuming there are any, count the nodes within the first ROW - which gives the matrix dimensions.
You can then use nested loops to populate the matrix, and once done, you can refer to an element as for example l_matrix(3)(2).
This is a working example, where the table being queried has values that are all a single character, for simplicity; you would need to change the matrix definition to handle the data you expect.
DECLARE
-- elements of matrix need to be data type and size suitable for your data
type t_row is table of varchar2(1) index by pls_integer;
type t_tab is table of t_row index by pls_integer;
l_matrix t_tab;
l_xml xmltype;
l_rows pls_integer;
l_cols pls_integer;
BEGIN
l_xml := dbms_xmlgen.getxmltype('SELECT * FROM your_table');
SELECT XMLQuery('count(/ROWSET/ROW)'
PASSING l_xml
RETURNING CONTENT).getnumberval()
INTO l_rows
FROM DUAL;
IF l_rows = 0 THEN
dbms_output.put_line('No data');
RETURN;
END IF;
SELECT XMLQuery('count(/ROWSET/ROW[1]/*)'
PASSING l_xml
RETURNING CONTENT).getnumberval()
INTO l_cols
FROM DUAL;
dbms_output.put_line('Rows: ' || l_rows || ' Cols: ' || l_cols);
-- populate matrix
FOR i IN 1..l_rows LOOP
FOR j in 1..l_cols LOOP
l_matrix(i)(j) := l_xml.extract('/ROWSET/ROW[' || i || ']/*[' || j || ']/text()').getstringval();
END LOOP;
END LOOP;
-- refer to a specific element
dbms_output.put_line('Element 3,2 should be H; is actually: ' || l_matrix(3)(2));
-- display all elements as list
FOR i IN 1..l_matrix.COUNT LOOP
FOR j IN 1..l_matrix(i).COUNT LOOP
dbms_output.put_line('Row ' || i || ' col ' || j || ': ' || l_matrix(i)(j));
END LOOP;
END LOOP;
-- display all elements as grid
FOR i IN 1..l_matrix.COUNT LOOP
FOR j IN 1..l_matrix(i).COUNT LOOP
IF j > 1 THEN
dbms_output.put(' ');
END IF;
-- format/pad real data to align it; simple here with single-char values
dbms_output.put(l_matrix(i)(j));
END LOOP;
dbms_output.new_line;
END LOOP;
END;
/
With my sample table that outputs:
Rows: 4 Cols: 3
Element 3,2 should be H; is actually: H
Row 1 col 1: A
Row 1 col 2: B
Row 1 col 3: C
Row 2 col 1: D
Row 2 col 2: E
Row 2 col 3: F
Row 3 col 1: G
Row 3 col 2: H
Row 3 col 3: I
Row 4 col 1: J
Row 4 col 2: K
Row 4 col 3: L
A B C
D E F
G H I
J K L
As @astenx said in a comment, you can use JSON instead of XML, which would also allow you to populate the matrix (there is a sketch of that below).
That has raised something else I'd overlooked. My dummy table doesn't have an ID column or an obvious way of ordering the results. For the JSON version I used rownum, but in both that and the original query the order of the rows (in the result set, and thus in the matrix) is currently indeterminate. You need to have a column you can order by in the query, or some other way to determine the sequence of rows.
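A minimal sketch of the JSON variant (my own, not the linked fiddle; it assumes Oracle 12.2 or later, the same single-character columns as above named COL1..COL3, and a hypothetical ID column to give a deterministic row order):
DECLARE
  type t_row is table of varchar2(1) index by pls_integer;
  type t_tab is table of t_row index by pls_integer;
  l_matrix t_tab;
  l_json   CLOB;
  l_rows   JSON_ARRAY_T;
  l_cols   JSON_ARRAY_T;
BEGIN
  -- build a JSON array of arrays, one inner array per row, ordered by the ID column
  SELECT JSON_ARRAYAGG(JSON_ARRAY(col1, col2, col3) ORDER BY id RETURNING CLOB)
  INTO l_json
  FROM your_table;
  l_rows := JSON_ARRAY_T.parse(l_json);
  -- JSON_ARRAY_T positions are zero-based; the matrix is one-based
  FOR i IN 0 .. l_rows.get_size - 1 LOOP
    l_cols := TREAT(l_rows.get(i) AS JSON_ARRAY_T);
    FOR j IN 0 .. l_cols.get_size - 1 LOOP
      l_matrix(i + 1)(j + 1) := l_cols.get_string(j);
    END LOOP;
  END LOOP;
  dbms_output.put_line('Element 3,2: ' || l_matrix(3)(2));
END;
/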

Related

Oracle: Update Every Row in a Table based off an Array

So I'm trying to create some seed data for a database that uses zip codes. I've created an array of 22 arbitrary zip code strings, and I'm trying to loop through the array and assign one of the zips to every row in a table. Based on what I read and tried (I'm a first-year, so I'm probably missing something), this should work, and it does when I just output the array value based on the count of the table. The issue is in the rowid subquery. When I run it in my console, it doesn't throw any errors, but it never completes and I think it's stuck in an infinite loop. How can I adjust this so that it will update the field and not get stuck?
declare
t_count NUMBER;
TYPE zips IS VARRAY(22) OF CHAR(5);
set_of_zips zips;
i NUMBER;
j NUMBER :=1;
BEGIN
SELECT count(*) INTO t_count FROM T_DATA;
set_of_zips:= zips('72550', '71601', '85920', '85135', '95451', '90021', '99611', '99928', '35213', '60475', '80451', '80023', '59330', '62226', '27127', '28006', '66515', '27620', '66527', '15438', '32601', '00000');
FOR i IN 1 .. t_count LOOP
UPDATE T_DATA
SET T_ZIP=set_of_zips(j)
---
WHERE rowid IN (
SELECT ri FROM (
SELECT rowid AS ri
FROM T_DATA
ORDER BY T_ZIP
)
) = i;
---
j := j + 1;
IF j > 22
THEN
j := 1;
END IF;
END LOOP;
COMMIT;
end;
You don't need PL/SQL for this.
UPDATE t_data
SET t_zip = DECODE(MOD(ROWNUM,22)+1,
1,'72550',
2,'71601',
3,'85920',
4,'85135',
5,'95451',
6,'90021',
7,'99611',
8,'99928',
9,'35213',
10,'60475',
11,'80451',
12,'80023',
13,'59330',
14,'62226',
15,'27127',
16,'28006',
17,'66515',
18,'27620',
19,'66527',
20,'15438',
21,'32601',
22,'00000');

PL/SQL: need to compare data for every field in a table

I need to create a procedure which will take a collection as an input and compare the data with staging table data, row by row, for every field (approx. 50 columns).
Business logic:
Whenever a staging table column value mismatches the corresponding collection variable value, I need to update the staging table's STATUS column to 'FAIL' and put the reason into the REASON column for that row.
If they match, I need to update the STATUS column to 'SUCCESS'.
The payload will be approx. 500 rows in each call.
I have created the sample script below:
PKG Specification :
CREATE OR REPLACE
PACKAGE process_data
IS
TYPE pass_data_rec
IS
record
(
p_eid employee.eid%type,
p_ename employee.ename%type,
p_salary employee.salary%type,
p_dept employee.dept%type
);
type p_data_tab IS TABLE OF pass_data_rec INDEX BY binary_integer;
PROCEDURE comp_data(inpt_data IN p_data_tab);
END;
PKG Body:
CREATE OR REPLACE
PACKAGE body process_data
IS
PROCEDURE comp_data (inpt_data IN p_data_tab)
IS
status VARCHAR2(10);
reason VARCHAR2(1000);
cnt1 NUMBER;
v_eid employee_copy.eid%type;
v_ename employee_copy.ename%type;
BEGIN
FOR i IN 1..inpt_data.count
LOOP
SELECT ec1.eid,ec1.ename,COUNT(*) over () INTO v_eid,v_ename,cnt1
FROM employee_copy ec1
WHERE ec1.eid = inpt_data(i).p_eid;
IF cnt1 > 0 THEN
IF (v_eid=inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
UPDATE employee_copy SET status = 'SUCCESS' WHERE eid = inpt_data(i).p_eid;
ELSE
UPDATE employee_copy SET status = 'FAIL' WHERE eid = inpt_data(i).p_eid;
END IF;
ELSE
NULL;
END IF;
END LOOP;
COMMIT;
status :='success';
EXCEPTION
WHEN OTHERS THEN
status:= 'fail';
--reason:=sqlerrm;
END;
END;
But this approach has the issues mentioned below.
I need to declare a local variable for each column value.
I need to compare all the variable data using the AND operator. I'm not sure whether that is the correct way or not, because with 50 columns the IF condition becomes very heavy:
IF (v_eid=inpt_data(i).p_eid AND v_ename = inpt_data(i).p_ename) THEN
I need to update the REASON column with the name of the first mismatched column when any column data mismatches for that row; with this approach I am not able to achieve that.
Please suggest any other good way to achieve this requirement.
Edit:
There is only one table at my end, i.e. the target table. The input will come from some other source as a collection object.
REVISED Answer
You could load the records into a temp table, but unless you want additional processing it's not necessary. AFAIK there is no way to identify the offending column (first one only) without slugging through it column by column. However, your other concern, having to declare a variable for each column, is not necessary. You can declare a single variable defined as %ROWTYPE, which gives you access to each column by name.
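For example (a minimal sketch, using the EMPLOYEE_COPY table from your post and a made-up eid value):
DECLARE
  l_row employee_copy%ROWTYPE;   -- one variable, every column available by name
BEGIN
  SELECT *
  INTO l_row
  FROM employee_copy
  WHERE eid = 1;                 -- hypothetical eid
  DBMS_OUTPUT.PUT_LINE(l_row.ename || ' / ' || l_row.salary || ' / ' || l_row.dept);
END;
/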
Looping through an array of data to find the occasional error is just bad (imho) when SQL is available to eliminate the good ones in one fell swoop. And it's available here. Even though your input is an array, we can use it as a table via the TABLE operator, which lets an array (collection) be queried as though it were a database table. So the MINUS operator can still be employed. The following routine will set the appropriate status and identify the first mismatched column for each entry in the input array. It keeps your original definition in the package spec, but replaces the comp_data procedure.
create or replace package body process_data
is
procedure comp_data (inpt_data in p_data_tab)
is
-- define local array to hold status and reason for ecah.
type status_reason_r is record
( eid employee_copy.eid%type
, status employee_copy.status%type
, reason employee_copy.reason%type
);
type status_reason_t is
table of status_reason_r
index by pls_integer;
status_reason status_reason_t;
-- define error array to contain the eid for each that have a mismatched column
type error_eids_t is table of employee_copy.eid%type ;
error_eids error_eids_t;
current_matched_indx pls_integer;
/*
Helper function to identify 1st mismatched column in error row.
Here is where we slug our way through each column to find the first column
value mismatch. Note: this does not actually validate the column sequence;
for our purpose here we'll proceed in the order of the input data type definition.
*/
function identify_mismatch_column(matched_indx_in pls_integer)
return varchar2
is
employee_copy_row employee_copy%rowtype;
mismatched_column employee_copy.reason%type;
begin
select *
into employee_copy_row
from employee_copy
where employee_copy.eid = inpt_data(matched_indx_in).p_eid;
-- now begins the task of finding the mismatched column.
if employee_copy_row.ename != inpt_data(matched_indx_in).p_ename
then
mismatched_column := 'employee_copy.ename';
elsif employee_copy_row.salary != inpt_data(matched_indx_in).p_salary
then
mismatched_column := 'employee_copy.salary';
elsif employee_copy_row.dept != inpt_data(matched_indx_in).p_dept
then
mismatched_column := 'employee_copy.dept';
-- elsif continue until ALL columns tested
end if;
return mismatched_column;
exception
-- NO_DATA_FOUND is the one error that cannot actually be reported in the employee_copy table.
-- It occurs when an eid exists in the input data but does not exist in employee_copy.
when NO_DATA_FOUND
then
dbms_output.put_line( 'Employee (eid)='
|| inpt_data(matched_indx_in).p_eid
|| ' does not exist in employee_copy table.'
);
return 'employee_copy.eid ID is NOT in table';
end identify_mismatch_column;
/*
Helper function to find specified eid in the initial inpt_data array
Since the resulting array of mismatching eids derives from a select without a sort,
there is no guarantee the index values actually match. Nor can we sort to build
the error array, as there is no way to know the order of eids in the initial array.
The following helper identifies the index value in the input array for the specified
eid in error.
*/
function match_indx(eid_in employee_copy.eid%type)
return pls_integer
is
l_at pls_integer := 1;
l_searching boolean := true;
begin
while l_at <= inpt_data.count
loop
exit when eid_in = inpt_data(l_at).p_eid;
l_at := l_at + 1;
end loop;
if l_at > inpt_data.count
then
raise_application_error( -20199, 'Internal error: Find index for ' || eid_in ||' not found');
end if;
return l_at;
end match_indx;
-- Main
begin
-- initialize the status entry for each input row;
-- additionally this results in a status_reason table in a 1:1 relationship with the input array.
for i in 1..inpt_data.count
loop
status_reason(i).eid := inpt_data(i).p_eid;
status_reason(i).status :='SUCCESS';
end loop;
/*
We can assume the majority of data in the input array is valid, meaning the columns match.
We'll eliminate all valid rows by selecting each and then MINUSing those that match on
every column. To accomplish this, wrap the input with the TABLE function, allowing its use in SQL.
The following produces an array of eids that have at least one column mismatch.
*/
select p_eid
bulk collect into error_eids
from (select p_eid, p_ename, p_salary, p_dept from TABLE(inpt_data)
minus
select eid, ename, salary, dept from employee_copy
) exs;
/*
The error_eids array now contains the eid for each mismatched data item.
Mark the status as failed, then begin the long hard process of identifying
the first column causing the mismatch.
The following loop uses the nested functions to slug its way through.
This keeps the mainline logic clear.
*/
for i in 1 .. error_eids.count -- if all inpt_data rows match then count is 0 and we bypass the entire loop
loop
current_matched_indx := match_indx(error_eids(i));
status_reason(current_matched_indx).status := 'FAIL';
status_reason(current_matched_indx).reason := identify_mismatch_column(current_matched_indx);
end loop;
-- update employee_copy with the appropriate status for each row in the input data,
-- except for any eid that is in the error array but doesn't exist in the employee_copy table.
forall i in inpt_data.first .. inpt_data.last
update employee_copy
set status = status_reason(i).status
, reason = status_reason(i).reason
where eid = inpt_data(i).p_eid;
end comp_data;
end process_data;
There are a couple of other techniques used here that you may want to look into if you are not familiar with them:
Nested functions. There are two functions defined and used in the procedure.
Bulk processing, that is BULK COLLECT and FORALL (a minimal sketch follows).
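A tiny standalone illustration of those two (my own, using the same EMPLOYEE_COPY table; it is not part of the solution above): BULK COLLECT fetches a whole result set into a collection in one round trip, and FORALL applies a DML statement for every element of a collection in one round trip.
DECLARE
  TYPE eid_t IS TABLE OF employee_copy.eid%TYPE;
  l_eids eid_t;
BEGIN
  -- fetch all ids at once
  SELECT eid BULK COLLECT INTO l_eids FROM employee_copy;
  -- one bulk-bound update instead of one update per loop iteration
  FORALL i IN 1 .. l_eids.COUNT
    UPDATE employee_copy SET status = 'SUCCESS' WHERE eid = l_eids(i);
END;
/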
Good Luck.
ORIGINAL Answer
It is NOT necessary to compare each column or to build a string by concatenating. As you indicated, comparing 50 columns becomes pretty heavy, so let the DBMS do most of the lifting. The MINUS operator does exactly what you need.
... the MINUS operator, which returns only unique rows returned by the
first query but not by the second.
Using that, this task needs only two updates: one to mark 'fail' and one to mark 'success'. So try:
create table e( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
);
create table stage ( e_id integer
, col1 varchar2(20)
, col2 varchar2(20)
, status varchar2(20)
, reason varchar2(20)
);
-- create package spec and body
create or replace package process_data
is
procedure comp_data;
end process_data;
create or replace package body process_data
is
procedure comp_data
is
begin
update stage
set status='failed'
, reason='No matching e row'
where e_id in ( select e_id
from (select e_id, col1, col2 from stage
minus
select e_id, col1, col2 from e
) exs
);
update stage
set status='success'
where status is null;
end comp_data;
end process_data;
-- test
-- populate tables
insert into e(e_id, col1, col2)
select 1,'ABC','def' from dual union all
select 2,'No','Not any' from dual union all
select 3,'ok', 'best ever' from dual union all
select 4,'xx','zzzzzz' from dual;
insert into stage(e_id, col1, col2)
select 1,'ABC','def' from dual union all
select 2,'No','Not any more' from dual union all
select 4,'yy', 'zzzzzz' from dual union all
select 5,'no e','nnnnn' from dual;
-- run procedure
begin
process_data.comp_data;
end;
-- check results
select * from stage;
Don't ask: yes, you must list every column you wish to compare in each of the queries involved in the MINUS operation.
I know the documentation link is old (10gR2), but actually finding Oracle documentation is a royal pain. The MINUS operator still functions the same in 19c.

ROWID in a casted PL/SQL collection

Is there any ROWID-like facility in a PL/SQL collection? In my case, while I am using this collection in an SQL query, I also need the sequence number in which the elements were put in. I know modifying the data structure is one way, but I want to use the index of the collection. So what I am looking for is something like this:
TYPE t_List IS TABLE OF VARCHAR2(200);
and
declare
v_Data t_List := t_List('data 1'
,'data_2'
,'data3');
......
FOR Rec IN (SELECT Column_Value v
,ROWID r
FROM TABLE(CAST(v_data AS t_List)))
LOOP
Dbms_Output.Put_Line('at ' || Rec.r || ':' || Rec.v);
-- .... and other codes here
END LOOP;
The loop is not expected to be executed in sequence, but I want something built-in like ROWID that is like the index of the collection.
Only schema-level types can be used in SQL statements, even within a PL/SQL block. As you seem to suggest you already know, you can create your own object type that includes the 'sequence' ID:
CREATE TYPE t_object AS OBJECT (
id NUMBER,
data VARCHAR2(200)
)
/
And a collection of that type:
CREATE TYPE t_List IS TABLE OF t_object;
/
And then populate the ID as you build the list:
DECLARE
l_List t_List := t_List(t_object(1, 'data 1')
,t_object(2, 'data_2')
,t_object(3, 'data3'));
BEGIN
FOR Rec IN (SELECT id, data
FROM TABLE(l_list))
LOOP
Dbms_Output.Put_Line('at ' || Rec.id || ':' || Rec.data);
-- .... and other codes here
END LOOP;
END;
/
Without an object type you can use the ROWNUM pseudocolumn:
CREATE TYPE t_List IS TABLE OF VARCHAR2(200);
/
DECLARE
v_Data t_List := t_List('data 1'
,'data_2'
,'data3');
BEGIN
FOR Rec IN (SELECT Column_Value v
,ROWNUM r
FROM TABLE(v_data))
LOOP
Dbms_Output.Put_Line('at ' || Rec.r || ':' || Rec.v);
-- .... and other codes here
END LOOP;
END;
/
anonymous block completed
at 1:data 1
at 2:data_2
at 3:data3
As far as I'm aware that isn't guaranteed to preserve the original creation sequence. I think it almost certainly will at the moment, but perhaps isn't something you should rely on as always being true. (There is no order without an order by, but here you don't have anything you can order by without destroying your initial order...).
If you query a subset of the table - I'm not sure what you mean by "the loop is not expected to be executed in sequence" - you'd need to generate the ROWNUM in a subquery before filtering or it won't be consistent. You'd also need to generate the ROWNUM in a subquery if you're joining this to other, real, tables - I imagine you are, otherwise you could use a PL/SQL collection.
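A minimal sketch of that (my own, reusing the schema-level t_List above): assign ROWNUM in an inline view first, then filter or join in the outer query, so the numbering is fixed before the predicate is applied.
DECLARE
  v_Data t_List := t_List('data 1', 'data_2', 'data3');
BEGIN
  FOR Rec IN (SELECT v, r
              FROM (SELECT Column_Value v, ROWNUM r
                    FROM TABLE(v_Data))
              WHERE r > 1)   -- filter applied after the sequence number is assigned
  LOOP
    Dbms_Output.Put_Line('at ' || Rec.r || ':' || Rec.v);
  END LOOP;
END;
/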
If indexing is important, use associative arrays (also known as index-by tables).
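For instance (a minimal sketch of my own): an associative array indexed by PLS_INTEGER keeps whatever index you assign at population time, so the insertion sequence is available directly in PL/SQL without going through SQL at all.
DECLARE
  TYPE t_aa IS TABLE OF VARCHAR2(200) INDEX BY PLS_INTEGER;
  v_data t_aa;
  i      PLS_INTEGER;
BEGIN
  v_data(1) := 'data 1';
  v_data(2) := 'data_2';
  v_data(3) := 'data3';
  i := v_data.FIRST;
  WHILE i IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE('at ' || i || ':' || v_data(i));
    i := v_data.NEXT(i);      -- walk the indexes in order
  END LOOP;
END;
/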

Is this a table of tables in PL/SQL? (If not, what is going on with this code?)

Can someone clarify what the PL/SQL code below is doing? It looks as if assets_type is a table of base_Asset. Does that make it a Table of Tables?
I'm having a difficult time visualizing this when it comes to populating the data:
assets(v_ref_key)(dbfields(i).field) := rtrim(replace(strbuf_long2,'''',''''''));
Is this similar to a two-dimensional array? Does this mean to populate the field column in the assets (temporary) table with an index of v_ref_key?
PROCEDURE LOAD
IS
TYPE dbfields_rec IS RECORD (field dbfields.field%TYPE,
article_title dbfields.title%TYPE,
image_title dbfields.title%TYPE);
TYPE dbfields_type IS TABLE OF dbfields_rec INDEX BY BINARY_INTEGER;
TYPE base_Asset IS TABLE OF VARCHAR2(4000) INDEX BY VARCHAR2(32);
TYPE assets_type IS TABLE OF base_Asset INDEX BY VARCHAR2(4000);
dbfields dbfields_type;
assets assets_type;
v_ref_key assets.ref_key%TYPE;
-- CLIPPED Populate dbfields array code
-- It correctly populates
FOR i IN 1..dbfields.COUNT LOOP
BEGIN
sqlbuf := '(select rtrim(ufld3), ' || dbfields(i).field ||
' as col_label from assetstable ' ||
' where rtrim(ufld3) = ' || '''' || in_id || '''' || ' )';
OPEN assets_cur FOR
sqlbuf;
LOOP
FETCH assets_cur INTO v_ref_key, strbuf_long2;
EXIT WHEN assets_cur%NOTFOUND;
IF (trim(strbuf_long2) is not null and dbfields(i).field is not null) THEN
assets(v_ref_key) (dbfields(i).field)
:= rtrim(replace(strbuf_long2,'''',''''''));
END IF;
END LOOP;
close assets_cur;
END;
END LOOP;
END LOAD;
PL/SQL really only provides one-dimensional arrays - but each element of the array is allowed to be another array if you want to make arrays that act like multi-dimensional ones.
Here is an awfully contrived example to illustrate:
DECLARE
TYPE rows_type IS TABLE OF VARCHAR2(4000) INDEX BY VARCHAR2(4000);
TYPE spreadsheet_type IS TABLE OF rows_type INDEX BY VARCHAR2(4000);
spreadsheet spreadsheet_type;
BEGIN
spreadsheet ('row 1') ('column A') := 'XYZ';
END;
The first ('row 1') is the index into the spreadsheet_type array, which would hold all the columns for a particular "row"; and the second ('column A') is the index into the rows_type array.
The "multi-dimensional" aspect of this implementation isn't perfect, however: while you can work with a whole row if you want, e.g.:
my_row rows_type;
...
my_row := spreadsheet ('row 1');
you can't pick out a particular column - there's no way to refer to the set of all elements in the rows for a particular index into rows_type. You'd have to do something like make another type and loop through the first array.
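A minimal sketch of that workaround (my own; it reuses the contrived declarations above, and simply reuses rows_type to hold the extracted column rather than declaring a new type):
DECLARE
  TYPE rows_type        IS TABLE OF VARCHAR2(4000) INDEX BY VARCHAR2(4000);
  TYPE spreadsheet_type IS TABLE OF rows_type INDEX BY VARCHAR2(4000);
  spreadsheet spreadsheet_type;
  col_a       rows_type;        -- will hold 'column A' keyed by row label
  r           VARCHAR2(4000);
BEGIN
  spreadsheet('row 1')('column A') := 'XYZ';
  spreadsheet('row 2')('column A') := 'PQR';
  -- loop through the outer array and pick the same column out of each row
  r := spreadsheet.FIRST;
  WHILE r IS NOT NULL LOOP
    IF spreadsheet(r).EXISTS('column A') THEN
      col_a(r) := spreadsheet(r)('column A');
    END IF;
    r := spreadsheet.NEXT(r);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('column A of row 2: ' || col_a('row 2'));
END;
/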

Find specific varchar in Oracle Nested Table

I'm new to PL/SQL, and struggling to find clear documentation of operations on nested tables. Please correct any misused terminology, etc.
I have a nested table type that I use as a parameter for a stored procedure.
CREATE OR REPLACE TYPE "STRARRAY" AS TABLE OF VARCHAR2 (255)
In my stored procedure, the table is initialized and populated. Say I have a VARCHAR2 variable, and I want to know true or false if that varchar exists in the nested table.
I tried
strarray.exists('somevarchar')
but I get an ORA-6502
Is there an easier way to do that other than iterating?
FOR i IN strarray.FIRST..strarray.LAST
LOOP
IF strarray(i) = value THEN
return 1;--found
END IF;
END LOOP;
For a single-value check I prefer the MEMBER OF operator.
zep#dev> declare
2 enames strarray;
3 wordToFind varchar2(255) := 'King';
4 begin
5 select emp.last_name bulk collect
6 into enames
7 from employees emp;
8 if wordToFind member of enames then
9 dbms_output.put_line('Found King');
10 end if;
11 end;
12 /
Found King
PL/SQL procedure successfully completed
zep#dev>
You can use the MULTISET INTERSECT operator to determine whether the string you're interested in exists in the collection. For example
declare
l_enames strarray;
l_interesting_enames strarray := new strarray( 'KING' );
begin
select ename
bulk collect into l_enames
from emp;
if( l_interesting_enames = l_interesting_enames MULTISET INTERSECT l_enames )
then
dbms_output.put_line( 'Found King' );
end if;
end;
will print out "Found King" if the string "KING" is an element of the l_enames collection.
You should pass an array index, not an array value, to EXISTS if you'd like to determine whether an element exists in the collection. Nested tables are indexed by integers, so there's no way to reference them by strings.
However, you might want to look at associative arrays instead of nested tables if you wish to reference your array elements by a string index. That would look like this:
DECLARE
TYPE assocArray IS TABLE OF VARCHAR2(100) INDEX BY VARCHAR2(100);
myArray assocArray;
BEGIN
myArray('foo') := 'bar';
IF myArray.exists('baz') THEN
dbms_output.put_line(myArray('baz'));
ELSIF myArray.exists('foo') THEN
dbms_output.put_line(myArray('foo'));
END IF;
END;
Basically, if your array values are distinct, you can create paired arrays referencing each other, like,
arr('b') := 'a'; arr('a') := 'b';
This technique might help you to easily look up any element and its index.
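A minimal sketch of that idea (my own illustration, reusing the schema-level STRARRAY type): build a reverse-lookup associative array keyed by value, so both membership and position checks become direct lookups.
DECLARE
  TYPE pos_by_value IS TABLE OF PLS_INTEGER INDEX BY VARCHAR2(255);
  enames strarray := strarray('Smith', 'King', 'Jones');
  rev    pos_by_value;
BEGIN
  FOR i IN 1 .. enames.COUNT LOOP
    rev(enames(i)) := i;        -- value -> position in the nested table
  END LOOP;
  IF rev.EXISTS('King') THEN
    DBMS_OUTPUT.PUT_LINE('Found King at position ' || rev('King'));
  END IF;
END;
/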
When a nested table is declared as a schema-level type, as you have done, it can be used in any SQL query as a table. So you can write a simple function like so:
CREATE OR REPLACE FUNCTION exists_in( str VARCHAR2, tab strarray)
RETURN BOOLEAN
AS
c INTEGER;
BEGIN
SELECT COUNT(*)
INTO c
FROM TABLE(CAST(tab AS strarray))
WHERE column_value = str;
RETURN (c > 0);
END exists_in;
