Search string to match Data inside of all existing columns and rows from Views in Oracle SQL

The code below looks for a string that matches a column name. However, I would like to search for a string that matches data (meaning, search each existing column and row from all views, not column names). I want the results to show me all view names that contain that string in their data. Hope this makes sense.
begin
   dbms_output.put_line('Owner                          View name');
   dbms_output.put_line('------------------------------ -------------------------------');
   for r in (
      select v.owner, v.view_name, v.text
        from all_views v
       where v.owner <> 'SYS'
   )
   loop
      if lower(r.text) like '%my_String%' then
         dbms_output.put_line(rpad(r.owner,31) || r.view_name);
      end if;
   end loop;
end;

I suggest you at least review some of the Oracle documentation. Your statement "Oracle SQL... I am using Oracle SQL Developer" indicates a lack of understanding, so let's clear some of that up. You are actually using 3 separate software tools:
Oracle SQL - all access to data stored in your Oracle database.
Oracle PL/SQL - the programming language, which tightly integrates with but is still separate from SQL. This is the language in which your script is written.
Oracle SQL Developer - an IDE within which you develop, run, etc. your PL/SQL scripts and/or SQL statements.
Now, as for your script: the syntax is fine and it will run, successfully retrieving all non-SYS-owned views and printing the names of those containing the specified search string. However, as it currently stands it will never find a match. Your LIKE compares a lower-cased string against a pattern that contains an upper-case 'S', so there can never be a match. Also keep in mind that within LIKE the underscore (_) is a single-character wildcard; if you want it matched literally you need to escape it, e.g. like '%my\_string%' escape '\'. With these in mind we come to:
REVISED
The requirement to actually find the string in the views' data completely changes things from your original query, although the title does suggest that was the initial intent. What you want to accomplish is much more complex and cannot be done in SQL alone; it requires PL/SQL or code in an external language. The reason is that you need to run a SELECT, but you don't know against what, nor which columns, nor even how many columns are needed. For that you need to identify each view, isolate the viable columns, generate the appropriate SQL dynamically, then actually execute that SQL and check the results.
Two approaches come to mind: parse the view to extract the columns (something I would not even consider doing in PL/SQL), or join ALL_VIEWS with ALL_TAB_COLUMNS (all table columns). That is what we'll do. The majority of the code consists of constructing the SQL and the error handling that dynamic SQL essentially forces on you. I've created this as a procedure with 2 parameters: the schema and the target string.
create or replace procedure find_string_in_view(
   schema_name_in   varchar2
 , target_string_in varchar2
)
is
   --- set up components for eventual execution
   k_new_line        constant varchar2(2) := chr(10);
   k_base_statement  constant varchar2(50) :=
      q'!select count(*) from <view_name> where 1=1 and !' || k_new_line;
   k_where_statement constant varchar2(50) :=
      q'! <column_name> like '%<target>%' !' || k_new_line;
   k_limit_statement constant varchar2(20) :=
      ' ) and rownum < 2';
   k_max_allowed_errors constant integer := 3;
   --- cursor for views and column names
   --- (ordered by column_id so the prev/next logic below sees each view's columns in sequence)
   cursor c_view_columns is
      select av.owner, av.view_name, atc.column_name
           , lead(atc.column_name) over (partition by av.view_name order by atc.column_id) next_col
           , lag(atc.column_name)  over (partition by av.view_name order by atc.column_id) prev_col
        from all_views av
        join all_tab_columns atc
          on (    atc.owner      = av.owner
              and atc.table_name = av.view_name
             )
       where av.owner = upper(schema_name_in)
         and atc.data_type in
             ('CHAR', 'NCHAR', 'VARCHAR2', 'NVARCHAR2', 'VARCHAR', 'NVARCHAR')
       order by av.owner, av.view_name, atc.column_id;
   --- local variables
   m_current_view    varchar2(61);
   m_sql_errors      integer := 0;
   m_where_connector varchar2(2);
   m_sql_statement   varchar2(4000);
   -- local helper function
   function view_has_string
      return boolean
   is
      l_item_count integer := 0;
   begin
      execute immediate m_sql_statement into l_item_count;
      return (l_item_count > 0);
   exception
      when others then
         dbms_output.put_line(rpad('-',61,'-') || k_new_line);
         dbms_output.put_line('Error processing:: ' || m_current_view);
         dbms_output.put_line('Statement::' || k_new_line || m_sql_statement);
         dbms_output.put_line(sqlerrm);
         m_sql_errors := m_sql_errors + 1;
         if m_sql_errors >= k_max_allowed_errors
         then
            raise_application_error(-20199,'Maximum errors allowed reached. Terminating.');
         end if;
         return false;
   end view_has_string;
begin -- MAIN --
   dbms_output.put_line('Owner                          View name');
   dbms_output.put_line('------------------------------ -------------------------------');
   for rec in c_view_columns
   loop
      if rec.prev_col is null
      then
         m_current_view    := rec.owner || '.' || rec.view_name;
         m_sql_statement   := replace(k_base_statement,'<view_name>',m_current_view);
         m_where_connector := ' (';
      end if;
      m_sql_statement := m_sql_statement || m_where_connector
                      || replace(k_where_statement,'<column_name>',rec.column_name);
      m_where_connector := 'or';
      if rec.next_col is null
      then
         m_sql_statement := replace(m_sql_statement,'<target>',target_string_in);
         m_sql_statement := m_sql_statement || k_limit_statement;
         if view_has_string
         then
            dbms_output.put_line(rpad(rec.owner,31) || rec.view_name);
         end if;
      end if;
   end loop;
end find_string_in_view;
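A quick usage sketch (the schema HR and the search string my_string are made-up values; enable DBMS_OUTPUT first so the matches are visible):
set serveroutput on
begin
   find_string_in_view('HR', 'my_string');
end;
/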

select v.owner, v.view_name, v.text
  from all_views v
 where v.owner <> 'SYS' and lower(v.text) like '%my_string%'
Won't one SQL statement do this?

Related

DBMS_SQL.EXECUTE not giving output when the SQL have DBMS_XMLGEN.GETXML

I am generating dynamic SQL that produces XML data using SELECT DBMS_XMLGEN.GETXML, as the output has special characters that are not supported in XML. I tried and tested the SQL separately; it works as expected.
I am not able to fetch the data when I call it using dynamic SQL.
I have used DBMS_SQL.DESC_TAB2, as it treats the whole SQL starting with DBMS_XMLGEN.GETXML as one column.
--Snippet
-- Declarations
SelectCursorId NUMBER; --For Dynamic SQL binding
RowProcessed INTEGER; --For Dynamic SQL binding
MstrSeqNbr NUMBER := 1;
ColumnCount NUMBER; --For Dynamic SQL binding
RecordSqrNbr NUMBER := 0; --For Dynamic SQL binding
ColumnDescTbl DBMS_SQL.DESC_TAB2; --dbms_sql.desc_tab2
ColumnValue VARCHAR2(4000); --For Dynamic SQL binding
DymanicSQLCols VARCHAR2(4000); -- For debugging purpose, columns returned
SelectSQL VARCHAR2(6000);
BEGIN
--Snippet
SelectSQL := 'SELECT DBMS_XMLGEN.GETXML(''SELECT MRQ.BatchNBR AS Batch_NUMBER,
MRQ.BatchRUNSEQNBR AS Batch_RUN_INSTANCE,
MRQ.BatchRUNDATE AS RUN_DATE,
MRQ.SPDATE AS POST_DATE,
MRQ.BatchNAME AS Batch_NAME,
MRQ.RPTNAME AS DATABASE_NAME,
MRQ.EFFDATE AS RUN_TIME,
MRQ.BatchSTARTDATE AS ELAPSED_TIME ,
CURSOR ( SELECT MRI.MSTREPORTRECSEQNBR AS RECORD_SEQUENCE_NUMBER,
MRI.RTTEXT1VC100 AS COUNTRY_NAME,
MRI.RTTEXT2VC100 AS CURRENCY_USED,
MRI.RTTEXT1VC50 AS COUNTRY_SHORT_CODE,
MRI.RTNUM1P0 AS ISO_CURRENCY_CODE
FROM MasterReporting MRI
WHERE BatchNbr = MR.BatchNbr
AND BatchRUNSEQNBR = MR.BatchRUNINSTANCE
ORDER BY MSTREPORTRECSEQNBR )Record
FROM BatchRUNHIST MR , MastRptSeqDtl MRQ
WHERE MR.BatchNbr = 100
AND MR.BatchRUNINSTANCE IN( 67)
AND MRQ.BatchRUNSEQNBR = MR.BatchRUNINSTANCE
AND MR.BatchRunStatCD =''''COMPL''''
'')
FROM DUAL ';
SelectCursorId := DBMS_SQL.OPEN_CURSOR; --Pass
DBMS_SQL.PARSE ( SelectCursorId, SelectSQL, DBMS_SQL.NATIVE); --Pass
DBMS_SQL.DESCRIBE_COLUMNS2( SelectCursorId, ColumnCount, ColumnDescTbl); --Pass
However, ColumnDescTbl(1).col_name is only giving the following; not sure if this is the issue:
DBMS_XMLGEN.GETXML('SELECT MRQ.BatchNBRASBatch_NUMBER, MRQ.BatchRUNSEQNBRASBatch_RUN_INSTANCE, MRQ.BatchRUNDATEASRUN_DATE, MRQ.SPDATEASPOST_DATE, MRQ.BatchNAMEASBatch_NAME, MRQ.RPTNAMEASDATABAS
The next step also passes:
For k in 1..ColumnCount LOOP
DBMS_SQL.DEFINE_COLUMN(SelectCursorId, k, ColumnValue, 4000); --Pass
END LOOP;
RowProcessed := DBMS_SQL.EXECUTE(SelectCursorId);
--Passes, but gives 0 as output,
--whereas running the SQL separately gives you one row of XML data.
At minimum, one row of XML data should be returned.
From the documentation for dbms_sql.execute:
The return value is only valid for INSERT, UPDATE, and DELETE statements; for other types of statements, including DDL, the return value is undefined and must be ignored.
As your statement is a select the RowProcessed return value has no meaning; it isn't significant that you're seeing zero there.
You need to do a fetch_rows step after that, or change it to execute_and_fetch. Either way you then need to process the fetched data, of course.
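For instance, a minimal sketch of that fetch loop, reusing the variable names from your snippet (and assuming each fetched value fits in the VARCHAR2(4000) ColumnValue buffer you defined):
RowProcessed := DBMS_SQL.EXECUTE(SelectCursorId);
LOOP
   EXIT WHEN DBMS_SQL.FETCH_ROWS(SelectCursorId) = 0;  -- fetch until no rows remain
   DBMS_SQL.COLUMN_VALUE(SelectCursorId, 1, ColumnValue);
   DBMS_OUTPUT.PUT_LINE(ColumnValue);                  -- or append to a CLOB, insert into a table, etc.
END LOOP;
DBMS_SQL.CLOSE_CURSOR(SelectCursorId);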
(The col_name looks like a non-issue; it's just an automatically generated name/alias. If you change your dynamic statement to end with ... ) AS my_col_alias FROM DUAL, then the col_name will be reported as MY_COL_ALIAS.)

Stored procedure in Oracle with three OUT parameters from different queries

I'm having trouble creating a stored procedure with one IN and three OUT parameters. Basically I have three queries that I want to put in a stored procedure.
How can I relate the OUT parameters with each of the queries?
CREATE OR REPLACE PROCEDURE 'SP_NAME'(
#phone_number IN VARCHAR2
REG OUT INTEGER
EMAIL OUT VARCHAR2
VALID OUT INTEGER )
AS
BEGIN
--I just need 1 or 0 to validate whether it exists in the table
SELECT COUNT (phone_number) FROM users
WHERE phone_number = #phone_number
-- this should bring me the email from a different table
SELECT email FROM details_users
WHERE phone_number = #phone_number
-- and this, whether it exists in yet another table
SELECT count(phone_number) FROM suscribers
WHERE phone_number = #phone_number
END;
Your revised business logic features three tables. In the absence of any indication that the tables' contents are related, it's better to keep them as separate queries.
Here is my solution (featuring valid Oracle syntax):
CREATE OR REPLACE PROCEDURE SP_NAME (
p_phone_number IN VARCHAR2
, p_REG OUT INTEGER
, p_EMAIL OUT VARCHAR2
, p_VALID OUT INTEGER )
AS
l_usr_cnt pls_integer;
l_email details_users.email%type;
l_sub_cnt pls_integer;
BEGIN
SELECT COUNT (*) into l_usr_cnt
FROM users u
WHERE u.phone_number = p_phone_number;
begin
select du.email into l_email
from details_users du
where du.phone_number = p_phone_number;
exception
when others then
l_email := null;
end;
SELECT COUNT (*) into l_sub_cnt
FROM suscribers s -- probably a typo :-/
WHERE s.phone_number = p_phone_number;
if l_usr_cnt != 0 then
p_reg := 1; -- "true"
else
p_reg := 0; -- "false"
end if;
p_email := l_email;
if l_sub_cnt != 0 then
p_valid := 1;
else
p_valid := 0;
end if;
END;
Note it is permitted to populate an OUT parameter directly from a query, like this:
SELECT COUNT (*)
into p_reg
FROM users u
WHERE u.phone_number = p_phone_number;
However, it is accepted good practice to work with local variables and only assign the OUT parameters at the end of the procedure. This ensures a consistent state is passed to the calling program (which matters particularly if the called procedure throws an exception).
Also, I prefixed all the parameter names with p_. This too is optional but good practice. Having distinct namespaces is always safer, but it especially matters when the parameter names would otherwise match table column names (i.e. phone_number).
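For completeness, a hedged sketch of calling the procedure and reading the OUT parameters (the phone number is a made-up value):
DECLARE
   l_reg   INTEGER;
   l_email details_users.email%TYPE;
   l_valid INTEGER;
BEGIN
   SP_NAME('5551234567', l_reg, l_email, l_valid);
   DBMS_OUTPUT.PUT_LINE('reg=' || l_reg || ', email=' || l_email || ', valid=' || l_valid);
END;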
You may use the queries below, then assign the result to each of the OUT parameters. I notice that the 1st and 3rd queries are the same. Is that a copy/paste error, or do you mean whether the phone number is valid or not?
SELECT COUNT (phone_number) INTO REG FROM users
WHERE phone_number = #phone_number;
SELECT email INTO EMAIL FROM details_users
WHERE phone_number = #phone_number;
VALID := REG;

Oracle: Pure PL/SQL data extraction and anonymization using temporary tables, read-only permissions

I am trying to create a PL/SQL script that extracts a root "object" together with all children and other relevant information from an Oracle production database. The purpose is to create a set of test data to recreate issues that are encountered in production. Due to data protection laws the data needs to be anonymized when extracted: object names, certain types of IDs, and monetary amounts need to be replaced.
I was trying to create one or more temporary translation tables, which would contain both the original values and the anonymized versions. Then I would join the real data with the translation tables and output the anonymized values wherever required.
DECLARE
rootId integer := 123456;
TYPE anonTableRow IS RECORD
(
id NUMBER,
fieldC NUMBER,
anonymizedFieldC NUMBER
);
TYPE anonTable IS TABLE OF anonTableRow;
anonObject anonTable;
BEGIN
FOR cursor_row IN
(
select
id,
fieldC,
1234 -- Here I would create anonymized values based on rowNum or something similar
from
prodTable
where id = rootId
)
LOOP
i := i + 1;
anonObject(i) := cursor_row;
END LOOP;
FOR cursor_row IN
(
select
prod_table.id,
prod_table.fieldB,
temp_table.anonymizedFieldC fieldC,
prod_table.fieldD
from
prod_table
inner join table(temp_table) on prod_table.id = temp_table.id
where prod_table.id = 123456789
)
LOOP
dbms_output.put_line('INSERT INTO prod_table VALUES (' || cursor_row.id || ', ' || cursor_row.fieldB || ', ' || cursor_row.fieldC || ', , ' || cursor_row.fieldD);
END LOOP;
END;
/
However I ran into several problems with this approach - it seems to be nearly impossible to join Oracle PL/SQL tables with real database tables. My access to the production database is severely restricted, so I cannot create global temporary tables, declare types outside PL/SQL, or anything of that sort.
My attempt to declare my own PL/SQL types failed with the problems mentioned in this question - the solution does not work for me because of the limited permissions.
Is there a pure PL/SQL way that does not require fancy permissions to achieve something like the above?
Please Note: The above code example is simplified a lot and would not really require a separate translation table - in reality I need access to the original and translated values in several different queries, so I would prefer not having to "recalculate" translations everywhere.
If your data is properly normalized, then I guess this should only be necessary for internal IDs (not sure why you need to translate them though).
The following code should work for you, keeping the mappings in Associative Arrays:
DECLARE
TYPE t_number_mapping IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;
mapping_field_c t_number_mapping;
BEGIN
-- Prepare mapping
FOR cur IN (
SELECT 101 AS field_c FROM dual UNION ALL SELECT 102 FROM dual -- test-data
) LOOP
mapping_field_c(cur.field_c) := mapping_field_c.COUNT; -- first entry mapped to 1
END LOOP;
-- Use mapping
FOR cur IN (
SELECT 101 AS field_c FROM dual UNION ALL SELECT 102 FROM dual -- test-data
) LOOP
-- You can use the mapping when generating the `INSERT` statement
dbms_output.put_line( cur.field_c || ' mapped to ' || mapping_field_c(cur.field_c) );
END LOOP;
END;
Output:
101 mapped to 1
102 mapped to 2
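Building on that, a hedged sketch of how the mapping could feed the generated INSERT statements (the target table and column names here are made up):
-- inside the second loop, instead of the plain put_line above:
dbms_output.put_line(
   'INSERT INTO anon_copy (field_c) VALUES (' || mapping_field_c(cur.field_c) || ');'
);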
If this isn't a permanent piece of production code, how about "borrowing" an existing collection type - e.g. one defined in SYS that you can access?
Using this script from your schema, you can generate a SQL*Plus script to describe all SYS-owned collection types:
select 'desc ' || type_name from all_types
where typecode = 'COLLECTION'
and owner = 'SYS';
Running the resulting script will show you the structure of all the ones you can access. This one, for example, looks potentially suitable:
SQL> desc KU$_PARAMVALUES1010
KU$_PARAMVALUES1010 TABLE OF SYS.KU$_PARAMVALUE1010
Name Null? Type
----------------------------------------- -------- ----------------------------
PARAM_NAME VARCHAR2(30)
PARAM_OP VARCHAR2(30)
PARAM_TYPE VARCHAR2(30)
PARAM_LENGTH NUMBER
PARAM_VALUE_N NUMBER
PARAM_VALUE_T VARCHAR2(4000)
Of course, you can't guarantee that type will still exist or be the same or be accessible to you after a database upgrade, hence my caveat at the start.
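To illustrate the idea, here is a hedged sketch using SYS.ODCIVARCHAR2LIST, another commonly accessible SYS collection type; the sample values and the ANON_ prefix are made up:
DECLARE
   -- substitute whichever type the describe script shows is available to you
   l_originals SYS.ODCIVARCHAR2LIST := SYS.ODCIVARCHAR2LIST('NAME_A', 'NAME_B');
BEGIN
   FOR r IN (SELECT t.column_value    AS original_value,
                    'ANON_' || ROWNUM AS anonymized_value
               FROM TABLE(l_originals) t)
   LOOP
      dbms_output.put_line(r.original_value || ' -> ' || r.anonymized_value);
   END LOOP;
END;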
A more generic way to achieve this goal.
In my example I'm using XQuery FLWOR expressions and dbms_xmlstore. Some knowledge of XQuery is required.
create table mask_user_objects as select * from user_objects where rownum <0;
declare
v_s_table varchar2(30) := 'USER_OBJECTS'; --uppercase!!!
v_d_table varchar2(30) := 'MASK_USER_OBJECTS'; --uppercase!!!
v_mask_columns xmltype := xmltype('<COLS><OBJECT_NAME>XXXX</OBJECT_NAME>
<DATA_OBJECT_ID>-1</DATA_OBJECT_ID>
<OBJECT_TYPE/>
</COLS>'); --uppercase!!!
insCtx DBMS_XMLSTORE.ctxType;
r NUMBER;
v_source_table xmltype;
v_cursor sys_refcursor;
begin
open v_cursor for 'select * from '||v_s_table||' where rownum <100 ';
v_source_table := xmltype(v_cursor);
close v_cursor;
-- Load source table into xmltype.
insCtx := DBMS_XMLSTORE.newContext(v_d_table); -- Get saved context
for rec in (
select tt.column_value from xmltable('
let $col := $anomyze/COLS
for $i in $doc/ROWSET/ROW
let $row := $i
return <ROWSET>
<ROW>
{
for $x in $row/*
return if(
exists($col/*[name() = $x/name()] )
) then element{$x/name()}{ $col/*[name() = $x/name()]/text() }
else element{$x/name()}{$x/text()}
}
</ROW>
</ROWSET>
'
passing v_source_table as "doc"
, v_mask_columns as "anomyze"
) tt) loop
null;
r := DBMS_XMLSTORE.insertXML(insCtx, rec.column_value);
end loop;
DBMS_XMLSTORE.closeContext(insCtx);
end;
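A quick way to check the masked copy afterwards (a sketch):
select object_name, data_object_id, object_type from mask_user_objects where rownum <= 10;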

PL/SQL - How to use an array in an IN Clause

I'm trying to use an array of input values to my procedure in an IN clause as part of the WHERE clause of a cursor. I know this has been asked before, but I haven't seen how to make my syntax compile correctly.
In the package specification, the type is
TYPE t_brth_dt IS TABLE OF sourceTable.stdt_brth_dt%TYPE INDEX BY PLS_INTEGER;
sourceTable.stdt_brth_dt is a date column in the table.
A simplified version of my cursor in the package body is:
cursor DataCursor_Sort( p_brth_dt in t_brth_dt) is
SELECT *
FROM sourceTable
WHERE a.brth_dt IN (select column_value
from table(p_brth_dt))
When I try to compile this, I'm getting the following errors.
[1]:(Error): PLS-00382: expression is of wrong type
[2]:(Error): PL/SQL: ORA-22905: cannot access rows from a non-nested table item
I know this looks similar to other questions, but I don't understand what the syntax error is.
In order to use a collection defined as a nested table or an associative array in the FROM clause of a query, you either should, as #Alex Poole correctly pointed out, create a schema-level (SQL) type, or use one that is already available to you - e.g. sys.odcidatelist, since you intend to use a list of dates. For example, your cursor definition might look like this:
cursor DataCursor_Sort(p_brth_dt in sys.odcidatelist) is
   select *
     from sourceTable a
    where a.brth_dt in (select column_value
                          from table(p_brth_dt))
OR
cursor DataCursor_Sort(p_brth_dt in sys.odcidatelist) is
select s.*
from sourceTable s
join table(p_brth_dt) t
on (s.brth_dt = t.column_value)
Note: you should take the time part of a date into consideration when performing a date comparison. If you want to compare the date part only, it would probably be useful to get rid of the time part by using the trunc() function.
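For example, a sketch of the first cursor with the time component stripped on both sides (note that applying trunc() to the column side can stop a regular index on brth_dt from being used):
cursor DataCursor_Sort(p_brth_dt in sys.odcidatelist) is
   select *
     from sourceTable a
    where trunc(a.brth_dt) in (select trunc(t.column_value)
                                 from table(p_brth_dt) t)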
It is possible to use a PL/SQL-defined nested table type (as opposed to a SQL-defined nested table type) indirectly in an IN clause of a SELECT statement in a PL/SQL package. You must use a PIPELINED function as an intermediary. It felt kind of clever to write, but I don't believe in its fundamental usefulness.
CREATE OR REPLACE PACKAGE so18989249 IS
TYPE date_plsql_nested_table_type IS TABLE OF DATE;
dates date_plsql_nested_table_type;
FUNCTION dates_pipelined RETURN date_plsql_nested_table_type PIPELINED;
PROCEDURE use_plsql_nested_table_type;
END so18989249;
/
CREATE OR REPLACE PACKAGE BODY so18989249 IS
FUNCTION dates_pipelined RETURN date_plsql_nested_table_type
PIPELINED IS
BEGIN
IF (dates.count > 0)
THEN
FOR i IN dates.first .. dates.last
LOOP
IF (dates.exists(i))
THEN
PIPE ROW(dates(i));
END IF;
END LOOP;
END IF;
RETURN; -- a pipelined function should finish with a bare RETURN
END;
PROCEDURE use_plsql_nested_table_type IS
BEGIN
dates := NEW date_plsql_nested_table_type();
-- tweak these values as you see fit to produce the dbms_output results you want
dates.extend(5);
dates(1) := DATE '2013-12-25';
dates(2) := DATE '2013-01-01';
dates(3) := DATE '2013-07-01';
dates(4) := DATE '2013-09-03';
dates(5) := DATE '2008-11-18';
FOR i IN (SELECT o.owner,
o.object_name,
o.object_type,
to_char(o.last_ddl_time, 'YYYY-MM-DD') AS last_ddl
FROM all_objects o
WHERE trunc(o.last_ddl_time) IN
(SELECT column_value FROM TABLE(dates_pipelined))
--uses pipeline function which uses pl/sql-defined nested table
)
LOOP
dbms_output.put_line('"' || i.owner || '"."' || i.object_name || '" ("' || i.object_type || ') on ' || i.last_ddl);
END LOOP;
END;
END so18989249;
/
begin so18989249.use_plsql_nested_table_type; end;
/
The type has to be created at SQL level, not in a package. An SQL query doesn't know how to use any types defined in PL/SQL. So you'd have to do:
CREATE OR REPLACE TYPE t_brth_dt IS TABLE OF date;
/
... and remove the type from your package specification. (Or give them different names, at least, and they won't be interchangeable in use). Because it's at SQL level, you also can't use sourceTable.stdt_brth_dt%TYPE in the declaration, unfortunately.
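With that SQL-level type in place, the cursor from the question should then compile; a sketch:
cursor DataCursor_Sort(p_brth_dt in t_brth_dt) is
   select *
     from sourceTable a
    where a.brth_dt in (select column_value
                          from table(p_brth_dt));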

Oracle PL\SQL Null Input Parameter WHERE condition

As of now I am using IF ELSE to handle this condition
IF INPUT_PARAM IS NOT NULL
SELECT ... FROM SOMETABLE WHERE COLUMN = INPUT_PARAM
ELSE
SELECT ... FROM SOMETABLE
Is there any better way to do this in a single query, without IF ELSE branches? As the query gets more complex there will be more input parameters like this, and the amount of IF ELSE required would be too much.
One method would be to use a variant of
WHERE column = nvl(var, column)
There are two pitfalls here, however:
If the column is nullable, this clause will filter out null values, whereas in your question you would not filter the null values in the second case. You could modify the clause to take nulls into account, but it turns ugly:
WHERE nvl(column, impossible_value) = nvl(var, impossible_value)
Of course, if somehow the impossible_value is ever inserted you will run into some other kind of (fun) problem.
The optimizer doesn't correctly understand this type of clause. It will sometimes produce a plan with a UNION ALL, but if there are more than a couple of nvl calls, you will get full scans even if perfectly valid indexes are present.
This is why, when there are lots of parameters (several search fields in a big form, for example), I like to use dynamic SQL:
DECLARE
l_query VARCHAR2(32767) := 'SELECT ... JOIN ... WHERE 1 = 1';
BEGIN
IF param1 IS NOT NULL THEN
l_query := l_query || ' AND column1 = :p1';
ELSE
l_query := l_query || ' AND :p1 IS NULL';
END IF;
/* repeat for each parameter */
...
/* open the cursor dynamically */
OPEN your_ref_cursor FOR l_query USING param1 /*,param2...*/;
END;
You can also use EXECUTE IMMEDIATE l_query INTO l_result USING param1;
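If you go the OPEN ... FOR route, a minimal sketch of consuming the cursor afterwards (l_row and its fields are assumptions; declare it to match your SELECT list):
LOOP
   FETCH your_ref_cursor INTO l_row;
   EXIT WHEN your_ref_cursor%NOTFOUND;
   /* process l_row here */
END LOOP;
CLOSE your_ref_cursor;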
This should work:
SELECT ... FROM SOMETABLE WHERE COLUMN = NVL( INPUT_PARAM, COLUMN )
