Index and RLS on Oracle 10g

I have a table data1 with two fields: user_id and data_id. There are two indexes, one on user_id and one on data_id; both are non-unique.
I have a function:
FUNCTION user_filter(p_schema IN VARCHAR2,
                     p_object IN VARCHAR2) RETURN VARCHAR2 IS
BEGIN
  RETURN 'user_id='||session_pkg.user_id;
END;
I register this function as an RLS policy on data1:
DBMS_RLS.ADD_POLICY(OBJECT_SCHEMA   => '',
                    OBJECT_NAME     => 'data1',
                    POLICY_NAME     => 'user_filter',
                    POLICY_FUNCTION => 'user_filter');
To get the best performance, do I have to create one more index, like the following?
create index data3_idx on data1 (user_id, data_id);
Thanks,

In general it would be wasteful to have three indexes for two columns, (data_id), (user_id, data_id) and (user_id), since Oracle can use the compound index both for queries that filter on user_id alone and for queries that filter on both columns.
In your case the DBMS_RLS.ADD_POLICY procedure will add the filter user_id=XX to all queries on this object. This means that you could replace the index on data_id with the more efficient compound index.
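A minimal sketch of that change (the existing index names are assumptions; check USER_INDEXES for the real ones):
drop index data1_data_id_idx;   -- assumed name of the single-column index on data_id
create index data1_ud_idx on data1 (user_id, data_id);
-- the single-column index on user_id is then redundant as well, since the
-- compound index leads with user_id and can serve those queries
drop index data1_user_id_idx;   -- assumed name of the single-column index on user_id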

Related

Can a type table of table%rowtype be indexed by some field of table%rowtype?

Why did I ask this question?
I have a table whose key consists of a lot of fields. Every time I write a join, I miss a field. Therefore I have defined a pipelined function that takes the key as an argument, so that I am sure I get only one element when doing a join.
But the query takes more time now. The table a has an index on some fields, but the table type used by the pipelined function does not. I would like to know if it is possible to create an index on some fields of the table%rowtype.
Code:
create table a ( a1 integer);
create package p_a
as
  type t_a is table of a%rowtype;
  function f(i_a1 integer) return t_a pipelined;
end;
create package body p_a
as
  cursor c_a (i_a1 integer)
    return a%rowtype
  is
    select t.*
    from a t
    where t.a1 = i_a1;
  function f (i_a1 integer)
    return t_a
    pipelined
  is
  begin
    for c in c_a (i_a1)
    loop
      pipe row (c);
    end loop;
    return;
  end;
end;
with b as (select 1 b1 from dual)
select * from b cross apply (table(p_a.f(b.b1)));
The question
I've tried to index the table type by a field of the table, like this:
create table a ( a1 integer);
create package p_a2
as
  type t_a is table of a%rowtype index by a.a1%type;
  function f(i_a1 integer) return t_a pipelined;
end;
This fails with:
PLS-00315: Implementation restriction: unsupported table index type
Is what I want to do possible? If not, how can I solve the performance problem mentioned in the introduction?
A TYPE is NOT a table and cannot be indexed.
When you do:
create package p_a
as
type t_a iS TABLE of a%ROWTYPE;
end;
/
You are defining a type, and the type is a collection data type; an instance of that type is NOT a physical table but is more like an in-memory array.
When you create a PIPELINED function:
function f(i_a1 integer) return t_a pipelined;
It does NOT return a table; it returns the collection data type.
When you do:
type t_a iS TABLE of a%ROWTYPE index by a.a1%type;
You are NOT creating an index on a table; you are changing to a different collection data type that is an associative array (like a JavaScript object or a Python dictionary) that stores key-value pairs.
An associative array is a PL/SQL data type and (with limited exceptions in later versions for insert, update and delete statements) cannot be used in SQL statements.
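As a small illustration (a sketch, not part of the original answer), an associative array lives purely in PL/SQL memory and is addressed by key:
declare
  -- associative array: a PL/SQL-only structure of key-value pairs
  type t_names is table of varchar2(20) index by pls_integer;
  l_names t_names;
begin
  l_names(1)  := 'one';
  l_names(42) := 'forty-two';
  dbms_output.put_line(l_names(42));  -- prints: forty-two
end;
/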
When you do:
SELECT * FROM TABLE(SYS.ODCIVARCHAR2LIST('a', 'b', 'c'));
or:
SELECT * FROM TABLE(p_a.f(1));
Then you are passing a collection data type to an SQL statement, and the table collection expression TABLE() treats the collection expression as if it were a table. It is still NOT a table.
If you want to use an index on the table then use the table directly (without a cursor or a pipelined function):
WITH b (b1) AS (
  SELECT 1 FROM DUAL
)
SELECT *
FROM b
CROSS APPLY (
  SELECT a.*
  FROM a
  WHERE a.a1 = b.b1
);
I think the first line of your question says it all: a "key that consists of a lot of fields". If I understand correctly, the table has a primary key that consists of a large number of columns, and because of that, writing queries becomes a challenge.
It sounds like you're building something pretty complex to work around what should not be an issue at all.
Take a step back and ask yourself: does this need to be the primary key of the table? Or can you use a surrogate key (identity column, sequence), make that the primary key and just create a unique index on the set of fields that currently makes up the primary key? It will (1) simplify your data model and (2) make writing the queries a lot easier. A sketch of that approach follows.
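A minimal sketch of that suggestion (the key columns k1, k2, k3 are placeholders for whatever currently makes up the key; identity columns need Oracle 12c or later):
create table a2 (
  id integer generated always as identity primary key,  -- surrogate key
  k1 integer,
  k2 integer,
  k3 integer,
  constraint a2_natural_key_uk unique (k1, k2, k3)      -- keeps the natural key unique
);
Joins can then use the single id column instead of repeating every key field.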

Combine Oracle CONTEXT index and function-based index

I have created the following PRODUCTS table in an Oracle 11g EE database:
CREATE TABLE PRODUCTS
( ID          NUMBER(10)     NOT NULL PRIMARY KEY
, NAME        VARCHAR2(100)  NOT NULL
, DESCRIPTION VARCHAR2(4000) NOT NULL
, CREATED     DATE           NOT NULL
, CHANGED     DATE );
For this table, the following index of type CTXSYS.CONTEXT was created on the DESCRIPTION column:
BEGIN
  CTX_DDL.CREATE_PREFERENCE ('MY_LEXER', 'BASIC_LEXER');
  CTX_DDL.SET_ATTRIBUTE ('MY_LEXER', 'BASE_LETTER', 'YES');
END;
/
CREATE INDEX PROD_DESCR_TXT_IDX ON PRODUCTS(DESCRIPTION) INDEXTYPE IS CTXSYS.CONTEXT PARAMETERS ('LEXER MY_LEXER');
BEGIN
  CTX_DDL.SYNC_INDEX('PROD_DESCR_TXT_IDX', '2M');
END;
/
ANALYZE TABLE PRODUCTS COMPUTE STATISTICS;
This table has many thousands of rows, and I need to query the CONTEXT index and return the rows in descending order by the date of the last change (column CHANGED) or, if that column is null, by the date of creation (column CREATED).
I would like to know if anyone can help me create an index that serves the CONTEXT search criteria and at the same time returns the results ordered by NVL(CHANGED, CREATED) DESC, so that the following command becomes instantaneous:
SELECT *
FROM PRODUCTS
WHERE CONTAINS (DESCRIPTION, 'kitchen AND accessories') > 0
ORDER BY NVL(CHANGED, CREATED) DESC;
Thanks in advance for your help.
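One commonly tried starting point, offered here only as a hedged sketch: a plain CTXSYS.CONTEXT index cannot also order by a date expression, but a descending function-based index on the ORDER BY expression may let the optimizer avoid the sort (the index name is illustrative):
CREATE INDEX PROD_CHANGE_DATE_IDX ON PRODUCTS (NVL(CHANGED, CREATED) DESC);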

execute SQL query stored in a table

I have a table in which one column stores either a SQL query returning ids or a comma-separated list of ids.
Create a table to store the query or the ids (comma-separated):
create table test1
(
name varchar(20) primary key,
stmt_or_value varchar(500),
type varchar(50)
);
insert into test1 (name, stmt_or_value, type)
values ('first', 'select id from data where id = 1;', 'SQL_QUERY');
insert into test1 (name, stmt_or_value, type)
values ('second', '1,2,3,4', 'VALUE');
The data table is as follows:
create table data
(
id number,
subject varchar(500)
);
insert into data (id, subject) values (1, 'test subject1');
insert into data (id, subject) values (2, 'test subject2');
insert into data (id, subject) values (3, 'test subject2');
I am not able to formulate a query that will return values after either executing the stored SQL or parsing the stored ids, based on the value of name. In pseudocode:
select id, subject
from data
where id in( EXECUTE IMMEDIATE stmt_or_value
where type='SQL_QUERY'
and name = 'first') or
( parse and return ids
from stmt_or_value
where type='VALUE'
and name = 'second')
Could you please help me with this?
Parsing the comma-separated values is done; I basically need help with the first part of the query below:
( EXECUTE IMMEDIATE stmt_or_value
where type='SQL_QUERY'
and name = 'first')
This seems a very peculiar requirement, and one which will be difficult to solve in a robust fashion. STMT_OR_VALUE is the embodiment of the One Column Two Usages anti-pattern. Furthermore, resolving STMT_OR_VALUE requires flow control logic and the use of dynamic SQL. Consequently it cannot be a pure SQL solution: you need to use PL/SQL to assemble and execute the dynamic query.
Here is a proof of concept for a solution. I have opted for a function which you can call from SQL. It depends on one assumption: every query string you insert into TEST1.STMT_OR_VALUE has a projection of a single numeric column and every value string is a CSV of numeric data only. With this proviso it is simple to construct a function which either executes a dynamic query or tokenizes the string into a series of numbers; both of which are bulk collected into a nested table:
create or replace function get_ids (p_name in test1.name%type)
  return sys.odcinumberlist
is
  l_rec        test1%rowtype;
  return_value sys.odcinumberlist;
begin
  select * into l_rec
  from test1
  where name = p_name;

  if l_rec.type = 'SQL_QUERY' then
    -- execute a query
    execute immediate l_rec.stmt_or_value
      bulk collect into return_value;
  else
    -- tokenize a string
    select xmltab.tkn
    bulk collect into return_value
    from ( select l_rec.stmt_or_value from dual) t
       , xmltable( 'for $text in ora:tokenize($in, ",") return $text'
                   passing stmt_or_value as "in"
                   columns tkn number path '.'
         ) xmltab;
  end if;

  return return_value;
end;
/
Note there is more than one way of executing a dynamic SQL statement and a multiplicity of ways to tokenize a CSV into a series of numbers. My decisions are arbitrary: feel free to substitute your preferred methods here.
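For instance, a REGEXP_SUBSTR-based tokenizer (a sketch under the same numeric-only CSV assumption; REGEXP_COUNT needs 11g or later) could replace the XMLTABLE branch inside the function:
    -- alternative else-branch: tokenize with REGEXP_SUBSTR instead of XMLTABLE
    select to_number(regexp_substr(l_rec.stmt_or_value, '[^,]+', 1, level))
    bulk collect into return_value
    from dual
    connect by level <= regexp_count(l_rec.stmt_or_value, '[^,]+');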
This function can be invoked with a table() call:
select *
from data
where id in ( select * from table(get_ids('first'))) -- execute query
or id in ( select * from table(get_ids('second'))) -- get string of values
/
The big benefit of this approach is it encapsulates the logic around the evaluation of STMT_OR_VALUE and hides use of Dynamic SQL. Consequently it is easy to employ it in any SQL statement whilst retaining readability, or to add further mechanisms for generating a set of IDs.
However, this solution is brittle. It will only work if the values in the test1 table obey the rules. That is, not only must they be convertible to a stream of single numbers but the SQL statements must be valid and executable by EXECUTE IMMEDIATE. For instance, the trailing semi-colon in the question's sample data is invalid and would cause EXECUTE IMMEDIATE to hurl. Dynamic SQL is hard not least because it converts compilation errors into runtime errors.
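A defensive tweak worth considering: strip trailing semicolons and whitespace before executing, so that data like the question's sample does not break the function (a sketch):
    -- inside get_ids, replacing the plain EXECUTE IMMEDIATE call
    execute immediate rtrim(l_rec.stmt_or_value, '; ' || chr(9) || chr(10) || chr(13))
      bulk collect into return_value;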
Following is the setup data used for this example (note that the columns are renamed here to avoid the keywords name and type):
create table test1
(
test_id number primary key,
stmt_or_value varchar(500),
test_type varchar(50)
);
insert into test1 (test_id, stmt_or_value, test_type)
values (1, 'select id from data where id = 1','SQL_QUERY');
insert into test1 (test_id, stmt_or_value, test_type)
values (2, '1,2,3,4','VALUE');
insert into test1 (test_id, stmt_or_value, test_type)
values (3, 'select id from data where id = 5','SQL_QUERY');
insert into test1 (test_id, stmt_or_value, test_type)
values (4, '3,4,5,6','VALUE');
select * from test1;
TEST_ID  STMT_OR_VALUE                     TEST_TYPE
      1  select id from data where id = 1  SQL_QUERY
      2  1,2,3,4                           VALUE
      3  select id from data where id = 5  SQL_QUERY
      4  3,4,5,6                           VALUE
create table data
(
id number,
subject varchar(500)
);
insert into data (id, subject) values (1, 'test subject1');
insert into data (id, subject) values (2, 'test subject2');
insert into data (id, subject) values (3, 'test subject3');
insert into data (id, subject) values (4, 'test subject4');
insert into data (id, subject) values (5, 'test subject5');
select * from data;
ID  SUBJECT
 1  test subject1
 2  test subject2
 3  test subject3
 4  test subject4
 5  test subject5
Below is the solution:
declare
  sql_stmt clob; -- to store the dynamic SQL
  type o_rec_typ is record (id data.id%type, subject data.subject%type);
  type o_tab_typ is table of o_rec_typ;
  o_tab o_tab_typ; -- to store the output records
begin
  -- The SELECT below generates the required dynamic SQL
  with stmts as (
    select (listagg(stmt_or_value, ' union all ') within group (order by stmt_or_value))
           || ' union all ' s
    from test1 t
    where test_type = 'SQL_QUERY')
  select q'{select id, subject
            from data
            where id in (}'
         || nullif(s, ' union all ')
         || q'{select distinct to_number(regexp_substr(s, '[^,]+', 1, l)) id
               from (select level l,
                            s
                     from (select listagg(stmt_or_value, ',') within group (order by stmt_or_value) s
                           from test1
                           where test_type = 'VALUE') inp
                     connect by level <= length(regexp_replace(s, '[^,]+')) + 1))}' stmt
    into sql_stmt
  from stmts; -- build the dynamic SQL and store it in the CLOB variable
  -- execute the statement, fetch and display the output
  execute immediate sql_stmt bulk collect into o_tab;
  for i in o_tab.first .. o_tab.last
  loop
    dbms_output.put_line('id: ' || o_tab(i).id || ' subject: ' || o_tab(i).subject);
  end loop;
end;
Output:
id: 1 subject: test subject1
id: 2 subject: test subject2
id: 3 subject: test subject3
id: 4 subject: test subject4
id: 5 subject: test subject5
Learnings:
Avoid using keywords for table and column names.
Design application tables effectively to serve current and reasonably foreseeable future requirements.
The above SQL will work; still, it is wise to review the table design, because the complexity of the code will keep increasing as requirements change.
How to convert comma-separated values into rows: https://asktom.oracle.com/pls/apex/f?p=100:11:::NO::P11_QUESTION_ID:9538583800346706523
An answer for part 1 (executing the stored query); please check:
declare
  my_sql varchar2(1000);
  v_num  number;
  v_num1 number;
begin
  select stmt_or_value into my_sql from test1 where type = 'SQL_QUERY';
  execute immediate my_sql into v_num;
  select id into v_num1 from data where id = v_num;
  dbms_output.put_line(v_num1);
end;

How to analyze a table using DBMS_STATS package in PL/SQL?

Here's the code I'm working on:
begin
  DBMS_STATS.GATHER_TABLE_STATS (ownname          => 'appdata',
                                 tabname          => 'TRANSACTIONS',
                                 cascade          => true,
                                 estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                                 method_opt       => 'for all indexed columns size 1',
                                 granularity      => 'ALL',
                                 degree           => 1);
end;
After executing the code, "PL/SQL procedure successfully completed" is displayed.
How do I view the statistics gathered for this particular table by DBMS_STATS?
You can see information in DBA_TABLES:
SELECT *
FROM DBA_TABLES WHERE table_name = 'TRANSACTIONS';
For example, the column LAST_ANALYZED shows when the table was last analyzed.
There is also column-by-column information in:
SELECT * FROM all_tab_columns WHERE table_name = 'TRANSACTIONS';
where you can find the min value, max value, etc.
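For example, a small sketch pulling the most commonly checked column statistics (the owner is assumed from the question):
SELECT column_name, num_distinct, num_nulls, density, last_analyzed
FROM all_tab_columns
WHERE owner = 'APPDATA'
AND table_name = 'TRANSACTIONS'
ORDER BY column_name;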

How to compare a local CLOB column against a CLOB column in a remote database instance

I want to verify that the data in 2 CLOB columns is the same on 2 different instances. If these were VARCHAR2 columns, I could use a MINUS or a join to determine if rows were in one instance or the other. Unfortunately, Oracle does not allow you to perform set operations on CLOB columns.
How do I compare 2 CLOB columns, one of which is in my local instance and one that is in a remote instance?
Example table structure:
CREATE TABLE X.TEXT_TABLE
( ID   VARCHAR2(30),   -- lengths are placeholders; VARCHAR2 needs an explicit length
  NAME VARCHAR2(100),
  TEXT CLOB
);
You can use an Oracle global temporary table to pull the CLOBs over to your local instance temporarily. You can then use the DBMS_LOB.COMPARE function to compare the CLOB columns.
If this query returns any rows, the CLOBs are different (more or fewer characters, different newlines, etc.) or the row exists in only one of the instances.
--Create a temporary table to store the text in
CREATE GLOBAL TEMPORARY TABLE X.TEMP_TEXT_TABLE
ON COMMIT DELETE ROWS
AS
SELECT * FROM X.TEXT_TABLE@REMOTE_DB;
--Use this statement if you need to refresh the TEMP_TEXT_TABLE table
INSERT INTO X.TEMP_TEXT_TABLE
SELECT * FROM X.TEXT_TABLE@REMOTE_DB;
--Do the comparison
SELECT DISTINCT
       TARGET.NAME TARGET_NAME
      ,SOURCE.NAME SOURCE_NAME
      ,DBMS_LOB.COMPARE (TARGET.TEXT, SOURCE.TEXT) AS COMPARISON
FROM (SELECT ID, NAME, TEXT FROM X.TEMP_TEXT_TABLE) TARGET
FULL OUTER JOIN
     (SELECT ID, NAME, TEXT FROM X.TEXT_TABLE) SOURCE
ON TARGET.ID = SOURCE.ID
WHERE DBMS_LOB.COMPARE (TARGET.TEXT, SOURCE.TEXT) <> 0
   OR DBMS_LOB.COMPARE (TARGET.TEXT, SOURCE.TEXT) IS NULL;
You can use DBMS_SQLHASH to compare the hashes of the relevant data. This should use significantly less IO than moving and comparing the CLOBs. The query below will just tell you if there are any differences in the entire table, but you can narrow it down.
select sys.dbms_sqlhash.gethash(sqltext => 'select text from text_table'
                               ,digest_type => 1/*MD4*/) from dual
minus
select sys.dbms_sqlhash.gethash(sqltext => 'select text from text_table@remoteDB'
                               ,digest_type => 1/*MD4*/) from dual;
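To narrow it down, hash matching subsets on both sides; the bucketing predicate below is only an assumption about the data, and the ORDER BY makes the hash deterministic:
select sys.dbms_sqlhash.gethash(sqltext => 'select text from text_table where id < ''M'' order by id'
                               ,digest_type => 1/*MD4*/) from dual
minus
select sys.dbms_sqlhash.gethash(sqltext => 'select text from text_table@remoteDB where id < ''M'' order by id'
                               ,digest_type => 1/*MD4*/) from dual;
Running one hash per bucket localizes which range of ids differs.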
