Is this safe against SQL injection? - oracle

Is the form below safe against SQL injection? I have to do this because p_select_statement would be a simple select * from table where lastModifiedTime > :p_asof and <conditions>, but the code that calls getByFilter expects the columns in the order col1, col2, col3, while select * may return them as col2, col3, col1.
OPEN CURSOR <dynamic_select_statement> USING <bind variable>
Example...
PROCEDURE getByFilter (
p_select_statement IN VARCHAR2,
p_asof IN TIMESTAMP,
p_cur OUT SYS_REFCURSOR
) AS
full_select_statement VARCHAR2(32767);
BEGIN
full_select_statement := 'SELECT
col1,
col2,
col3
FROM (' || p_select_statement || ')';
OPEN p_cur FOR full_select_statement USING p_asof;
END getByFilter;

No, not if the conditions can be set dynamically:
VARIABLE cur REFCURSOR;
DECLARE
  conditions VARCHAR2(4000)
    := '1 = 0 UNION ALL '
    || 'select username AS col1, password AS col2, other_column AS col3 '
    || 'FROM your_secret_password_table';
  l_sql VARCHAR2(4000)
    := 'select * from table where lastModifiedTime > :p_asof and ' || conditions;
BEGIN
  getByFilter (
    p_select_statement => l_sql,
    p_asof             => SYSTIMESTAMP,
    p_cur              => :cur
  );
END;
/
PRINT cur;

Yes, your example is clearly open to SQL injection. It counts as SQL injection whenever your procedure executes some input verbatim as a query (or as part of a query, as in this case).
To answer your question about whether unsafe input could do damage, I can think of two possibilities:
The SELECT passed as an argument to the procedure could be designed to be very resource-intensive. For example, a query with a large Cartesian join:
SELECT * FROM AnyTable
CROSS JOIN AnyTable
CROSS JOIN AnyTable
CROSS JOIN AnyTable
CROSS JOIN AnyTable
ORDER BY 1;
This generates a result set of a few trillion rows and attempts to sort it. That could be a denial-of-service attack.
The SELECT could also call a stored function that modifies data:
SELECT SomeFunctionThatDeletes() FROM AnyTable;
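Note that a function invoked from a SELECT cannot normally perform DML (it fails with ORA-14551), so for this attack the function would typically declare an autonomous transaction. A hypothetical sketch of such a function (the table name is made up):
create or replace function SomeFunctionThatDeletes return number as
  pragma autonomous_transaction; -- lets the function commit DML even when called from a SELECT
begin
  delete from important_table;   -- hypothetical table; runs once per row of AnyTable above
  commit;
  return 1;
end;
/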
Whether your procedure is unsafe depends on whether the calling code allows untrusted input to become the argument to your procedure.
Suppose you had some safeguard like this (written in PHP, but the same applies to any language):
$user_option = $_GET['option'];
$queries = [
1 => 'select * from table where lastModifiedTime > :p_asof and <conditions>',
2 => 'select * from table where lastModifiedTime < :p_asof and <conditions>'
];
$stmt = $db->prepare('call getByFilter(?)');
$stmt->execute([$queries[$user_option]]);
Now the user can pick with an input of '1' or '2' which query to run, but they have to pick one or the other of these hard-coded queries. They can't do anything mischievous like change the SQL query that is run.
That's a type of whitelisting. In other words, your application code must limit the choices so that the user's input can pick only safe choices. The user input should never be allowed to be executed as SQL directly without some kind of whitelisting like this.
You could also implement the whitelisting inside your procedure.
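For example, a minimal sketch of that idea (the filter names and the whitelisted queries here are made up for illustration):
PROCEDURE getByFilter (
  p_filter_name IN VARCHAR2, -- a key such as 'RECENT' or 'STALE', never raw SQL
  p_asof        IN TIMESTAMP,
  p_cur         OUT SYS_REFCURSOR
) AS
  full_select_statement VARCHAR2(32767);
BEGIN
  -- map the caller's choice onto one of the hard-coded queries
  CASE p_filter_name
    WHEN 'RECENT' THEN
      full_select_statement := 'SELECT col1, col2, col3 FROM my_table WHERE lastModifiedTime > :p_asof';
    WHEN 'STALE' THEN
      full_select_statement := 'SELECT col1, col2, col3 FROM my_table WHERE lastModifiedTime < :p_asof';
    ELSE
      RAISE_APPLICATION_ERROR(-20001, 'Unknown filter: ' || p_filter_name);
  END CASE;
  OPEN p_cur FOR full_select_statement USING p_asof;
END getByFilter;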

Related

Creating SQL-Injection proof dynamic where-clause from collection in PL/SQL

I need to execute a query where the where-clause is generated based on user input. The input consists of 0 or more pairs of varchar2s.
For example:
[('firstname','John')
,('lastname','Smith')
,('street','somestreetname')]
This would translate into:
where (attrib = 'firstname' and value = 'John')
and (attrib = 'lastname' and value = 'Smith')
and (attrib = 'street' and value = 'somestreetname')
This is not the actual data structure, as there are several tables, but for this example let's keep it simple and say the values are in one table. Also, I know the parentheses are not necessary in this case, but I put them there to make things clear.
What I do now is loop over the pairs and concatenate them to the SQL string. I made a stored proc to generate this where-clause, which is probably not very secure either, since I just concatenate to the original query.
Something like the following, where I try to get the ID's of the nodes that correspond with the requested parameters:
l_query := 'select DISTINCT n.id from node n, attribute_values av
where av.node_id = n.id ' || getWhereClause(p_params);
open l_rc for l_query;
fetch l_rc bulk collect into l_ids;
close l_rc;
But this is not secure, so I'm looking for a way that can guarantee security and prevent SQL-injection attacks.
Does anyone have any idea how this is done in a secure way? I would like to use bind variables, but I don't see how I can when you don't know the number of parameters.
DB: v12.1.0.2 (I think)
It's still a bit unclear and generalised, but assuming you have a schema-level collection type, something like:
create type t_attr_value_pair as object (attrib varchar2(30), value varchar2(30))
/
create type t_attr_value_pairs as table of t_attr_value_pair
/
then you can use the attribute/value pairs in the collection for the bind:
declare
l_query varchar2(4000);
l_rc sys_refcursor;
type t_ids is table of number;
l_ids t_ids;
l_attr_value_pairs t_attr_value_pairs;
-- this is as shown in the question; sounds like it isn't exactly how you have it
p_params varchar2(4000) := q'^[('firstname','John')
,('lastname','Smith')
,('street','somestreetname')]^';
begin
-- whatever mechanism you want to get the value pairs into a collection;
-- this is just a quick hack to translate your example string
select t_attr_value_pair(rtrim(ltrim(
regexp_substr(replace(p_params, chr(10)), '(.*?)(,|$)', 1, (2 * level) - 1, null, 1),
'[('''), ''''),
rtrim(ltrim(
regexp_substr(replace(p_params, chr(10)), '(.*?)(,|$)', 1, 2 * level, null, 1),
''''), ''')]'))
bulk collect into l_attr_value_pairs
from dual
connect by level <= regexp_count(p_params, ',') / 2 + 1;
l_query := 'select DISTINCT id from attribute_values
where (attrib, value) in ((select attrib, value from table(:a)))';
open l_rc for l_query using l_attr_value_pairs;
fetch l_rc bulk collect into l_ids;
close l_rc;
for i in 1..l_ids.count loop
dbms_output.put_line('id ' || l_ids(i));
end loop;
end;
/
although it doesn't need to be dynamic with this approach:
...
begin
-- whatever mechanism you want to get the value pairs into a collection
...
select DISTINCT id
bulk collect into l_ids
from attribute_values
where (attrib, value) in ((select attrib, value from table(l_attr_value_pairs)));
for i in 1..l_ids.count loop
dbms_output.put_line('id ' || l_ids(i));
end loop;
end;
/
or with a join to the table collection expression:
select DISTINCT av.id
bulk collect into l_ids
from table(l_attr_value_pairs) t
join attribute_values av on av.attrib = t.attrib and av.value = t.value;
Other collection types will need different approaches.
Alternatively, you could build up your where clause with one condition per attribute/value pair while still using bind variables - but you would need two levels of dynamic SQL, similar to this. (A DBMS_SQL variant of the same idea is sketched below.)
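The linked approach is not reproduced here; as one concrete alternative, DBMS_SQL lets you bind a number of variables that is only known at runtime, so you can generate one condition per pair and still bind every value. A sketch, assuming the t_attr_value_pairs type from above (the one-table data model mirrors the question's simplification):
declare
  l_pairs  t_attr_value_pairs := t_attr_value_pairs(
                t_attr_value_pair('firstname', 'John'),
                t_attr_value_pair('lastname', 'Smith'));
  l_sql    varchar2(4000) := 'select distinct id from attribute_values where 1 = 1';
  l_cur    integer := dbms_sql.open_cursor;
  l_rc     sys_refcursor;
  l_id     number;
  l_ignore integer;
begin
  -- one condition per pair, each with its own numbered placeholders
  for i in 1..l_pairs.count loop
    l_sql := l_sql || ' and (attrib = :a' || i || ' and value = :v' || i || ')';
  end loop;
  dbms_sql.parse(l_cur, l_sql, dbms_sql.native);
  -- bind every placeholder; only the values travel, never the SQL text
  for i in 1..l_pairs.count loop
    dbms_sql.bind_variable(l_cur, ':a' || i, l_pairs(i).attrib);
    dbms_sql.bind_variable(l_cur, ':v' || i, l_pairs(i).value);
  end loop;
  l_ignore := dbms_sql.execute(l_cur);
  l_rc := dbms_sql.to_refcursor(l_cur); -- 11g+; fetch from l_rc as usual
  loop
    fetch l_rc into l_id;
    exit when l_rc%notfound;
    dbms_output.put_line('id ' || l_id);
  end loop;
  close l_rc;
end;
/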

What is the solution for the errors in this PL/SQL code? [duplicate]

The database schemas (source and target) are very large (each has over 350 tables). I have been given the task of somehow merging these two schemas into one. The data itself (what's in the tables) has to be migrated. I have to be careful that there are no duplicate primary keys before or while merging the schemata. Has anybody done this already and could provide their solution, or could anyone help me find an approach to the task? My approaches have all failed, and my advisor just tells me to get help online :/
To my approach:
I have tried using the all_constraints view to get all PKs from my DB.
SELECT cols.table_name, cols.column_name, cols.position, cons.status, cons.owner
FROM all_constraints cons, all_cons_columns cols
WHERE cols.owner = 'DB'
AND cons.constraint_type = 'P'
AND cons.constraint_name = cols.constraint_name
AND cons.owner = cols.owner
ORDER BY cols.table_name, cols.position;
I also "know" that there has to be a sequence for the primary keys to add values to it:
CREATE SEQUENCE seq_pk_addition
MINVALUE 1
MAXVALUE 99999999999999999999
START WITH 1
INCREMENT BY 1
CACHE 20;
Because I am a noob when it comes to PL/SQL (or SQL in general):
So how/what should I do next? :/
Here is a link for an ERD of the database: https://ufile.io/9tdoj
virus scan: https://www.virustotal.com/#/file/dbe5f418115e50313a2268fb33a924cc8cb57a43bc85b3bbf5f6a571b184627e/detection
As promised in my comment, I have prepared dynamic code which you can try to get the data merged between the source and target tables. The logic is as below:
Step 1: Get all the table names from the SOURCE schema. In the query below you need to replace the schema (owner) name accordingly. For testing purposes I had taken only 1 table, so when you run it, remove the table-name filtering clause.
Step 2: Get the constrained column names for the table. These are used to prepare the ON clause, which is later used in the MERGE statement.
Step 3: Get the non-constrained column names for the table. These are used in the UPDATE clause of the MERGE.
Step 4: Prepare the insert list for when the data does not match the ON condition of the MERGE statement.
Read my inline comments to understand each step.
CREATE OR REPLACE PROCEDURE COPY_TABLE
AS
Type OBJ_NME is table of varchar2(100) index by pls_integer;
--To hold Table name
v_obj_nm OBJ_NME ;
--To hold Columns of table
v_col_nm OBJ_NME;
v_othr_col_nm OBJ_NME;
on_clause VARCHAR2(2000);
upd_clause VARCHAR2(4000);
cntr number:=0;
v_sql VARCHAR2(4000);
col_list1 VARCHAR2(4000);
col_list2 VARCHAR2(4000);
col_list3 VARCHAR2(4000);
col_list4 varchar2(4000);
col_list5 VARCHAR2(4000);
col_list6 VARCHAR2(4000);
col_list7 VARCHAR2(4000);
col_list8 varchar2(4000);
BEGIN
--Get Source table names
SELECT OBJECT_NAME
BULK COLLECT INTO v_obj_nm
FROM all_objects
WHERE owner LIKE 'RU%' -- Replace `RU%` with your Source schema name here
AND object_type = 'TABLE'
and object_name ='TEST'; --remove this condition if you want this to run for all tables
FOR I IN 1..v_obj_nm.count
loop
--Reset the accumulators for each table
on_clause := NULL; upd_clause := NULL; cntr := 0;
col_list2 := NULL; col_list4 := NULL; col_list6 := NULL; col_list8 := NULL;
--Columns with Constraints
SELECT column_name
bulk collect into v_col_nm
FROM user_cons_columns
WHERE table_name = v_obj_nm(i);
--Columns without Constraints remain columns of table
SELECT *
BULK COLLECT INTO v_othr_col_nm
from (
SELECT column_name
FROM user_tab_cols
WHERE table_name = v_obj_nm(i)
MINUS
SELECT column_name
FROM user_cons_columns
WHERE table_name = v_obj_nm(i));
--Prepare Update Clause
FOR l IN 1..v_othr_col_nm.count
loop
cntr:=cntr+1;
--Accumulate the SET assignments; MERGE separates them with commas, not AND
--(the original overwrote upd_clause on each iteration)
upd_clause := upd_clause||'t1.'||v_othr_col_nm(l)||' = t2.'||v_othr_col_nm(l)||', ';
col_list1:= 't1.'||v_othr_col_nm(l) ||',';
col_list2:= col_list2||col_list1;
col_list5:= 't2.'||v_othr_col_nm(l) ||',';
col_list6:= col_list6||col_list5;
IF (cntr = v_othr_col_nm.count)
THEN
upd_clause:=rtrim(upd_clause,', ');
col_list2:=rtrim( col_list2,',');
col_list6:=rtrim( col_list6,',');
END IF;
dbms_output.put_line(col_list2||col_list6);
End loop;
--Update clause ends
cntr:=0; --Counter reset
--Prepare ON clause
FOR k IN 1..v_col_nm.count
loop
cntr:=cntr+1;
--Accumulate the join conditions (the original overwrote on_clause on each iteration)
on_clause := on_clause||'t1.'||v_col_nm(k)||' = t2.'||v_col_nm(k)||' and ';
col_list3:= 't1.'||v_col_nm(k) ||',';
col_list4:= col_list4||col_list3;
col_list7:= 't2.'||v_col_nm(k) ||',';
col_list8:= col_list8||col_list7;
IF (cntr = v_col_nm.count)
THEN
on_clause:=rtrim(on_clause,' and');
col_list4:=rtrim( col_list4,',');
col_list8:=rtrim( col_list8,',');
end if;
dbms_output.put_line(col_list4||col_list8);
end loop;
--ON clause ends
--Prepare merge statement (once per table, outside the column loops)
v_sql:= 'MERGE INTO '|| v_obj_nm(i)||' t1 --put target schema name before v_obj_nm
USING (SELECT * FROM '|| v_obj_nm(i)||') t2 --put source schema name before v_obj_nm here
ON ('||on_clause||')
WHEN MATCHED THEN
UPDATE
SET '||upd_clause ||
' WHEN NOT MATCHED THEN
INSERT
('||col_list2||','
||col_list4||
')
VALUES
('||col_list6||','
||col_list8||
')';
dbms_output.put_line(v_sql);
execute immediate v_sql;
End loop;
END;
/
Execution:
exec COPY_TABLE
Output:
anonymous block completed
In the end I hope you can understand my code (you being a noob) and can implement something similar if the above fails for your requirement.
PS: I have tested this with a table with 2 columns, one of which had a unique key constraint. The DDL of the table is as below:
CREATE TABLE TEST
( COL2 NUMBER,
COLUMN1 VARCHAR2(20 BYTE),
CONSTRAINT TEST_UK1 UNIQUE (COLUMN1)
) ;
Oh dear! Normally, such a question would be quickly closed as "too broad", but we need to support victims of evil advisors!
As for the effort, I would estimate a week of full-time work for an experienced expert, plus two days of quality checking by an experienced QA engineer.
First of all, there is no way that such a complex data merge will work on the first try. That means you'll need test copies of both schemas that can easily be rebuilt, and you'll need a place to try it out. Normally this is done with an export of both schemas and an empty dev database.
Next, you need both schemas close enough together to be able to compare the data. I'd do it with an import of the export files mentioned above. If the schema names are identical, then rename one during import.
Next, I'd double-check that the structures are really identical, with queries like:
SELECT a.owner, a.table_name, b.owner, b.table_name
FROM all_tables a
FULL JOIN all_tables b
ON a.table_name = b.table_name
AND a.owner = 'SCHEMAA'
AND b.owner = 'SCHEMAB'
WHERE a.owner IS NULL or b.owner IS NULL;
Next, I'd check if the primary and unique keys have overlaps:
SELECT id FROM schemaa.table1
INTERSECT
SELECT id FROM schemab.table1;
As there are 300+ tables, I'd generate those queries:
DECLARE
stmt VARCHAR2(30000);
n NUMBER;
schema_a CONSTANT VARCHAR2(128 BYTE) := 'SCHEMAA';
schema_b CONSTANT VARCHAR2(128 BYTE) := 'SCHEMAB';
BEGIN
FOR c IN (SELECT owner, constraint_name, table_name,
(SELECT LISTAGG(column_name,',') WITHIN GROUP (ORDER BY position)
FROM all_cons_columns c
WHERE s.owner = c.owner
AND s.constraint_name = c.constraint_name) AS cols
FROM all_constraints s
WHERE s.constraint_type IN ('P')
AND s.owner = schema_a)
LOOP
dbms_output.put_line('Checking pk '||c.constraint_name||' on table '||c.table_name);
stmt := 'SELECT count(*) FROM '||schema_a||'.'||c.table_name
||' JOIN '||schema_b||'.'||c.table_name
|| ' USING ('||c.cols||')';
--dbms_output.put_line('Query '||stmt);
EXECUTE IMMEDIATE stmt INTO n;
dbms_output.put_line('Found '||n||' overlapping primary keys in table '||c.table_name);
END LOOP;
END;
/
First of all, for 350 tables you will most probably need dynamic SQL.
Declare a CURSOR or a COLLECTION - a table of VARCHAR2 - with all the table names.
Declare a string variable to build the dynamic SQL.
Loop through the entire list of table names and, for each table, generate a string which will be executed as SQL with the EXECUTE IMMEDIATE command.
The dynamic SQL which is built should insert the values from the source table into the target table. In case the PK already exists in the target table, check the field which represents the last-updated date: if it is more recent in the source table than in the target table, perform an update in the target table; else, do nothing. A skeleton of this loop is sketched below.
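For illustration, a minimal skeleton under stated assumptions: SOURCE_SCHEMA and TARGET_SCHEMA are placeholder schema names, and each table is assumed to have an id primary key and a last_updated column (both hypothetical; derive the real ones from the data dictionary as shown in the other answers):
begin
  for t in (select table_name from all_tables where owner = 'SOURCE_SCHEMA') loop
    execute immediate
        'merge into TARGET_SCHEMA.' || t.table_name || ' t'
     || ' using SOURCE_SCHEMA.' || t.table_name || ' s'
     || ' on (t.id = s.id)'                                  -- real PK column(s) per table
     || ' when matched then update set t.last_updated = s.last_updated' -- plus the other non-key columns
     || ' where s.last_updated > t.last_updated'             -- only if the source row is newer
     || ' when not matched then insert (id, last_updated)'
     || ' values (s.id, s.last_updated)';
  end loop;
end;
/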

multiple select query with same where clause

I have two type of select statement with the same complicated where clause.
One - Returns transaction detailed data
select field_1, field_2, field_3, ... , field_30 from my_table where my_where_clause
Second - Returns transaction data grouped by(distinct) merchants
select distinct field_1, field_2, field_8 from my_table where my_where_clause
Statements are called separately.
I want to simplify my code and not repeat this complicated where clause in both statements, without losing performance.
In dynamic SQL it's possible but I don't want to use dynamic SQL.
Any suggestions?
Suggestion: you can try a GROUPING SETS expression.
It allows you to selectively specify the set of groups that you want to create within a GROUP BY clause.
In your case, you can specify 2 sets: one group-by set for all fields from 1 to 30, and another set for fields 1, 2 and 8.
Link: https://docs.oracle.com/cd/E40518_01/server.761/es_eql/src/reql_aggregation_grouping_sets.html
However, it will return the output of both groups in a single result set; not sure if this fits your design.
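For illustration, a sketch with a shortened column list (field_1, field_2, field_3 and field_8 standing in for the full set of 30 columns); rows produced by the smaller set have NULL in the columns it doesn't group by, and GROUPING() tells the two sets apart:
select field_1, field_2, field_3, field_8,
       grouping(field_3) as from_merchant_set -- 1 for rows of the (field_1, field_2, field_8) set
from my_table
where my_where_clause
group by grouping sets (
  (field_1, field_2, field_3, field_8), -- the detailed set
  (field_1, field_2, field_8)           -- the "distinct merchants" set
);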
So you could encapsulate this statement in a view or function, e.g.:
create or replace view view_1 as
select field_1, field_2, field_3, ... , field_30
from my_table
where my_where_clause
Then your second query could be:
select distinct field_1, field_2, field_8 from view_1;
You said that you are using this query from Java. Try this:
create or replace function get_cursor(p_type varchar2 default null /* other parameters */) return sys_refcursor
is
result_cursor sys_refcursor;
begin
open result_cursor for 'select '||p_type||' object_type,status from user_objects' /* where clause */;
return result_cursor;
end;
And usage of this from Java:
Connection con = ...
CallableStatement callableStatement = con.prepareCall("declare c sys_refcursor; begin ? := get_cursor(?); end ; ");
callableStatement.registerOutParameter(1, OracleTypes.CURSOR);
callableStatement.setString(2, "Distinct"); // for distinct
or
callableStatement.setNull(2, OracleTypes.VARCHAR); // for full results
callableStatement.executeUpdate();
ResultSet rs = (ResultSet) callableStatement.getObject(1);
while(rs.next()) {
System.err.println(rs.getString(1));
}
rs.close();
con.close();
Another solution:
Add one more parameter and do simple deduplication using all columns from the query. But I don't see any advantages.
select object_type,status from
(select object_type,status, row_number() over( partition by object_type,status order by 1) rn from user_objects /* your_where_clause */
) where rn = case when 'DISTINCT' /* <- parameter here :isDistinct */ = 'DISTINCT' then 1 else rn end;
You can make dynamic SQL more readable by using multi-line strings, alternative quotes, and templates.
declare
v_select varchar2(32767);
v_where varchar2(32767);
v_code varchar2(32767) := '
##SELECT##
##WHERE##
';
begin
--Populate the clauses.
if ... then
v_select := 'select field_1, field_2, field_3, ... , field_30 from my_table';
else
v_select := 'select distinct field_1, field_2, field_8 from my_table';
end if;
if ... then
v_where :=
q'[
where field_1 = 'foo'
and field_2 = :bind1
...
]';
else
v_where :=
q'[
where field_2 = 'bar'
and field_2 = :bind2
...
]';
end if;
--Fill in the code.
v_code := replace(v_code, '##SELECT##', v_select);
v_code := replace(v_code, '##WHERE##', v_where);
--Print the code to check the formatting. Remove after testing.
dbms_output.put_line(v_code);
--Run it.
execute immediate v_code using ...;
end;
/
It's not perfect, but it prevents ugly concatenation. And it's much better than the anti-patterns needed to avoid dynamic SQL at all costs. In most languages, features like polymorphism and reflection are better than dynamic code; PL/SQL does not have good support for those advanced features, so it's usually better to build the code as a string.

Oracle: Pure PL/SQL data extraction and anonymization using temporary tables, read-only permissions

I am trying to create a PL/SQL script that extracts a root "object" together with all children and other relevant information from an oracle production database. The purpose is to create a set of test-data to recreate issues that are encountered in production. Due to data protection laws the data needs to be anonymized when extracted - object names, certain types of id's, and monetary amounts need to be replaced.
I was trying to create one or more temporary translation tables, which would contain both the original values and anonymized versions. Then I would join the real data with the translation tables and output the anonymized values wherever required.
DECLARE
rootId integer := 123456;
i integer := 0;
TYPE anonTableRow IS RECORD
(
id NUMBER,
fieldC NUMBER,
anonymizedFieldC NUMBER
);
TYPE anonTable IS TABLE OF anonTableRow;
anonObject anonTable := anonTable();
BEGIN
FOR cursor_row IN
(
select
id,
fieldC,
1234 as anonymizedFieldC -- Here I would create anonymized values based on rowNum or something similar
from
prod_table
where id = rootId
)
LOOP
i := i + 1;
anonObject.EXTEND;
anonObject(i).id := cursor_row.id;
anonObject(i).fieldC := cursor_row.fieldC;
anonObject(i).anonymizedFieldC := cursor_row.anonymizedFieldC;
END LOOP;
FOR cursor_row IN
(
select
prod_table.id,
prod_table.fieldB,
temp_table.anonymizedFieldC fieldC,
prod_table.fieldD
from
prod_table
inner join table(anonObject) temp_table on prod_table.id = temp_table.id -- this join with the PL/SQL table is what fails
where prod_table.id = rootId
)
LOOP
dbms_output.put_line('INSERT INTO prod_table VALUES (' || cursor_row.id || ', ' || cursor_row.fieldB || ', ' || cursor_row.fieldC || ', ' || cursor_row.fieldD || ')');
END LOOP;
END;
/
However, I ran into several problems with this approach - it seems to be nearly impossible to join PL/SQL collections with real database tables. My access to the production database is severely restricted, so I cannot create global temporary tables, declare types outside PL/SQL, or anything of that sort.
My attempt to declare my own PL/SQL types failed with the problems mentioned in this question - the solution does not work for me because of the limited permissions.
Is there a pure PL/SQL way that does not require fancy permissions to achieve something like the above?
Please Note: The above code example is simplified a lot and would not really require a separate translation table - in reality I need access to the original and translated values in several different queries, so I would prefer not having to "recalculate" translations everywhere.
If your data is properly normalized, then I guess this should only be necessary for internal IDs (not sure why you need to translate them though).
The following code should work for you, keeping the mappings in Associative Arrays:
DECLARE
TYPE t_number_mapping IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
mapping_field_c t_number_mapping;
BEGIN
-- Prepare mapping
FOR cur IN (
SELECT 101 AS field_c FROM dual UNION ALL SELECT 102 FROM dual -- test-data
) LOOP
mapping_field_c(cur.field_c) := mapping_field_c.COUNT; -- first entry mapped to 1
END LOOP;
-- Use mapping
FOR cur IN (
SELECT 101 AS field_c FROM dual UNION ALL SELECT 102 FROM dual -- test-data
) LOOP
-- You can use the mapping when generating the `INSERT` statement
dbms_output.put_line( cur.field_c || ' mapped to ' || mapping_field_c(cur.field_c) );
END LOOP;
END;
Output:
101 mapped to 1
102 mapped to 2
If this isn't a permanent piece of production code, how about "borrowing" an existing collection type - e.g. one defined in SYS that you can access?
Using this script from your schema you can generate a SQL Plus script to describe all SYS-owned types:
select 'desc ' || type_name from all_types
where typecode = 'COLLECTION'
and owner = 'SYS';
Running the resulting script will show you the structure of all the ones you can access. This one looks potentially suitable, for example:
SQL> desc KU$_PARAMVALUES1010
KU$_PARAMVALUES1010 TABLE OF SYS.KU$_PARAMVALUE1010
Name Null? Type
----------------------------------------- -------- ----------------------------
PARAM_NAME VARCHAR2(30)
PARAM_OP VARCHAR2(30)
PARAM_TYPE VARCHAR2(30)
PARAM_LENGTH NUMBER
PARAM_VALUE_N NUMBER
PARAM_VALUE_T VARCHAR2(4000)
Of course, you can't guarantee that type will still exist or be the same or be accessible to you after a database upgrade, hence my caveat at the start.
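With those caveats, usage could look something like this (a sketch; it assumes you have EXECUTE privilege on the type, and it repurposes the PARAM_NAME and PARAM_VALUE_T attributes to hold an original/anonymized name pair):
declare
  -- borrow the SYS collection; its attributes are listed in the describe above
  l_pairs sys.KU$_PARAMVALUES1010 := sys.KU$_PARAMVALUES1010();
begin
  l_pairs.extend;
  -- attribute order: (PARAM_NAME, PARAM_OP, PARAM_TYPE, PARAM_LENGTH, PARAM_VALUE_N, PARAM_VALUE_T)
  l_pairs(1) := sys.KU$_PARAMVALUE1010('ORIGINAL_NAME', null, null, null, null, 'ANONYMIZED_NAME');
  -- because the type lives at schema level, it can be queried (and joined) via TABLE()
  for r in (select param_name, param_value_t from table(l_pairs)) loop
    dbms_output.put_line(r.param_name || ' -> ' || r.param_value_t);
  end loop;
end;
/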
A more generic way to achieve this goal:
In my example I'm using XQuery FLWOR expressions and DBMS_XMLSTORE. Knowledge of XQuery is mandatory.
create table mask_user_objects as select * from user_objects where rownum <0;
declare
v_s_table varchar2(30) := 'USER_OBJECTS'; --uppercase!!!
v_d_table varchar2(30) := 'MASK_USER_OBJECTS'; --uppercase!!!
v_mask_columns xmltype := xmltype('<COLS><OBJECT_NAME>XXXX</OBJECT_NAME>
<DATA_OBJECT_ID>-1</DATA_OBJECT_ID>
<OBJECT_TYPE/>
</COLS>'); --uppercase!!!
insCtx DBMS_XMLSTORE.ctxType;
r NUMBER;
v_source_table xmltype;
v_cursor sys_refcursor;
begin
open v_cursor for 'select * from '||v_s_table||' where rownum <100 ';
v_source_table := xmltype(v_cursor);
close v_cursor;
-- Load source table into xmltype.
insCtx := DBMS_XMLSTORE.newContext(v_d_table); -- Get saved context
for rec in (
select tt.column_value from xmltable('
let $col := $anomyze/COLS
for $i in $doc/ROWSET/ROW
let $row := $i
return <ROWSET>
<ROW>
{
for $x in $row/*
return if(
exists($col/*[name() = $x/name()] )
) then element{$x/name()}{ $col/*[name() = $x/name()]/text() }
else element{$x/name()}{$x/text()}
}
</ROW>
</ROWSET>
'
passing v_source_table as "doc"
, v_mask_columns as "anomyze"
) tt) loop
null;
r := DBMS_XMLSTORE.insertXML(insCtx, rec.column_value);
end loop;
DBMS_XMLSTORE.closeContext(insCtx);
end;

Using string in Oracle stored procedure

When I simply write a query like
Select * from ..
where ...
AND gfcid in ( select regexp_substr('1005771621,1001035181'||',','\d+',1,level)
from dual
connect by level <= (select max(length('1005771621,1001035181')-length(replace('1005771621,1001035181',',')))+1
from dual) )
It works.
But I want to make it a dynamic query in an Oracle stored procedure. I did it like this:
GDFCID_STRING := ' select regexp_substr('
|| '1005771621,1001035181'
|| ','
|| ','
|| '\d+'
|| ',1,level) from dual connect by level <= (select max(length('
|| '1005771621,1001035181'
|| ')-length(replace('
|| '1005771621,1001035181'
|| ','
|| ','
|| ')))+1 from dual)';
Select * from ..
where ...
AND gfcid in (GDFCID_STRING)
But it does not work.
As far as I understand your problem, you need a method to accept a comma-delimited string as an input, break it into a collection of integers and then compare a number (read: integer) with the values in this collection.
Oracle offers mainly three types of collections: varrays, nested tables, and associative arrays. I will explain how to convert a comma-delimited string into a nested table and use it to query or compare.
First, you need to define an object type in the schema. You can write queries using this type only if you define it at schema level.
CREATE OR REPLACE TYPE entity_id AS OBJECT (id_val NUMBER(28));
/
CREATE OR REPLACE TYPE entity_id_set IS TABLE OF entity_id;
/
Next, define a function like this:
FUNCTION comma_to_nt_integer (p_comma_delimited_str IN VARCHAR2)
RETURN entity_id_set IS
v_table entity_id_set;
BEGIN
WITH temp AS (SELECT TRIM(BOTH ',' FROM p_comma_delimited_str) AS str FROM DUAL)
SELECT ENTITY_ID(TRIM (REGEXP_SUBSTR (t.str,
'[^,]+',
1,
LEVEL)))
str
BULK COLLECT INTO v_table
FROM temp t
CONNECT BY INSTR (str,
',',
1,
LEVEL - 1) > 0;
RETURN v_table;
END comma_to_nt_integer;
You are done with the DDL required for this task. Now, you can simply write your query as:
SELECT *
FROM ..
WHERE ...
AND gfcid IN (SELECT t.id_val FROM TABLE(comma_to_nt_integer(GDFCID_STRING)) t);
In general you can use
execute immediate v_your_sql_code;
to execute dynamic SQL within PL/SQL, but from your question I'm not entirely sure what you want to do.
Edit:
v_your_sql_code := 'Select yourColumn from .. where ... AND gfcid in ('||GDFCID_STRING||')';
execute immediate v_your_sql_code into v_result;
You'll have to define v_result with the right datatype; you could use more than one result variable if you need more result columns, and you'll need e.g. a collection type if you can retrieve more than one row.
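For example, if the query can return multiple rows, you can bulk collect into a collection (a sketch; yourColumn and your_table are placeholders, and it reuses the comma_to_nt_integer function from the first answer so the whole ID string travels as a single bind variable):
declare
  type t_num_tab is table of number;
  v_results t_num_tab;
  v_sql     varchar2(4000);
begin
  v_sql := 'select yourColumn from your_table where gfcid in ('
        || 'select t.id_val from table(comma_to_nt_integer(:ids)) t)';
  -- the ID list is bound, never concatenated into the SQL text
  execute immediate v_sql bulk collect into v_results using '1005771621,1001035181';
  for i in 1..v_results.count loop
    dbms_output.put_line(v_results(i));
  end loop;
end;
/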
