Oracle 12 Materialized Views only latest records

Say you have a table of car owners (car_owners):
car_id
person_id
registration_date
...
And each time someone buys a car, a record is inserted into this table.
Now I would like to create a materialized view that holds only the newest registration for each vehicle; that is, when a record is inserted, the materialized view updates the row for that vehicle (if it exists) with the new record from the base table.
The materialized view should hold only one record per car.
I tried something like this:
create materialized view newest_owner
build immediately
refresh force on commit
as
select *
from car_owners c
where c.registration_date = (
select max(cc.registration_date)
from car_owners cc
where cc.car_id = c.car_id
);
It seems that materialized views do not like sub-selects.
Do you have any tips on how to do this or how to achieve this another way?
For now I have another solution: use triggers to keep a separate table with the newest values (see the sketch below), but I was hoping that a materialized view could do the trick.
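Roughly, the trigger fallback I have in mind looks like this (the table and trigger names are just placeholders):
create table newest_owner_t (
  car_id            number primary key,
  person_id         number,
  registration_date date
);
create or replace trigger car_owners_air
after insert on car_owners
for each row
begin
  -- keep one row per car, replacing it only with a newer registration
  merge into newest_owner_t n
  using (select :new.car_id            as car_id,
                :new.person_id         as person_id,
                :new.registration_date as registration_date
         from dual) s
  on (n.car_id = s.car_id)
  when matched then update
    set n.person_id = s.person_id,
        n.registration_date = s.registration_date
    where s.registration_date > n.registration_date
  when not matched then insert (car_id, person_id, registration_date)
    values (s.car_id, s.person_id, s.registration_date);
end;
/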
Thanks.

For this, you may have to use nested materialized views:
create table car_owners
(pk_col number primary key
,car_id number
,person_id number
,registration_date date
);
truncate table car_owners;
insert into car_owners
select rownum
,trunc(dbms_random.value(1,1000)) car_id
,mod(rownum,100000) person_id
,(sysdate-dbms_random.value(1,3000)) registration_date
from dual
connect by rownum <= 1000000;
commit;
exec dbms_stats.gather_table_stats(null,'car_owners')
create materialized view log on car_owners with sequence, rowid
(car_id, registration_date) including new values;
create materialized view latest_registration
refresh fast on commit enable query rewrite
as
select c.car_id
,max(c.registration_date) max_registration_date
from car_owners c
group by c.car_id
/
create materialized view log on latest_registration with sequence, rowid
(car_id, max_registration_date) including new values;
create materialized view newest_owner
refresh fast on commit enable query rewrite
as
select c.rowid row_id,cl.rowid cl_rowid, c.pk_col, c.car_id, c.person_id, c.registration_date
from car_owners c
join latest_registration cl
on c.registration_date = cl.max_registration_date
and c.car_id = cl.car_id
/
select * from newest_owner where car_id = 25;
ROW_ID CL_ROWID PK_COL CAR_ID PERSON_ID REGISTRAT
------------------ ------------------ ---------- ---------- ---------- ---------
AAAUreAAMAAD/IxABS AAAUriAAMAAD+TNACE 644158 25 44158 09-APR-22
insert into car_owners values (1000001, 25,-1,sysdate);
commit;
select * from newest_owner where car_id = 25;
ROW_ID CL_ROWID PK_COL CAR_ID PERSON_ID REGISTRAT
------------------ ------------------ ---------- ---------- ---------- ---------
AAAUreAAMAAD/pLAB1 AAAUriAAMAAD+TNACE 1000001 25 -1 22-APR-22
explain plan for
select c.rowid row_id,cl.rowid cl_rowid, c.pk_col, c.car_id, c.person_id, c.registration_date
from car_owners c
join latest_registration cl
on c.registration_date = cl.max_registration_date
and c.car_id = cl.car_id
/
select * from table(dbms_xplan.display);
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 999 | 41958 | 3 (0)| 00:00:01 |
| 1 | MAT_VIEW REWRITE ACCESS FULL| NEWEST_OWNER | 999 | 41958 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------
Have a read of the docs: https://docs.oracle.com/en/database/oracle/oracle-database/21/dwhsg/basic-materialized-views.html#GUID-E087FDD0-B08C-4878-BBA9-DE56A705835E and https://docs.oracle.com/en/database/oracle/oracle-database/21/dwhsg/basic-materialized-views.html#GUID-179C8C8A-585B-49E6-8970-09396DB53DE3. There are some restrictions that can slow down your refreshes (e.g. deletes).
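If you want to check up front which refresh paths a candidate query supports, dbms_mview.explain_mview will tell you; a quick sketch (it assumes mv_capabilities_table already exists, created by $ORACLE_HOME/rdbms/admin/utlxmv.sql):
begin
  dbms_mview.explain_mview(q'[select c.rowid row_id, cl.rowid cl_rowid,
       c.pk_col, c.car_id, c.person_id, c.registration_date
  from car_owners c
  join latest_registration cl
    on c.registration_date = cl.max_registration_date
   and c.car_id = cl.car_id]');
end;
/
select capability_name, possible, msgno, msgtxt from mv_capabilities_table;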

Related

Oracle index in the underlying SQL query

I have a table with 2 columns:
item_name varchar2
brand varchar2
Both of them have a bitmap index.
Let's say I create a view for a specific brand and rename the column item_name, something like this:
create view my_brand as
select item_name as item from x where brand = 'x';
We cannot create an index on a normal view, but what does Oracle do when it issues the underlying query of that view? Is the index on the item_name column used if we write select item from my_brand where item='item1'?
Thanks.
The answer will be “it depends”. The index access path is certainly an option open to the optimizer; but remember that the optimizer makes a cost-based decision. So essentially it will evaluate the cost of all the available plans and choose the one with the lowest cost.
Here is an example:
create table tab1 ( item_name varchar2(15), brand varchar2(15) );
insert into tab1
select 'Name '||to_char( rownum), 'Brand '||to_char(mod(rownum,10))
from dual
connect by rownum < 1000000;
commit;
exec dbms_stats.gather_table_stats( user, 'TAB1' );
create bitmap index bm1 on tab1 ( item_name );
create bitmap index bm2 on tab1 ( brand );
create or replace view my_brand
as select item_name as item from tab1 where brand = 'Brand 1';
explain plan for
select item from my_brand where item = 'Name 1001';
select * from table( dbms_xplan.display );
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 20 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| TAB1 | 1 | 20 | 3 (0)| 00:00:01 |
| 2 | BITMAP CONVERSION TO ROWIDS | | | | | |
|* 3 | BITMAP INDEX SINGLE VALUE | BM1 | | | | |
--------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("BRAND"='Brand 1')
3 - access("ITEM_NAME"='Name 1001')
"We can create an index on a normal view"
No, you can't:
SQL> create table as_idx_view (item_name varchar2(10), brand varchar2(10));
Table created.
SQL> create view as_view as select item_name item from as_idx_view where brand = 'X';
View created.
SQL> create index as_idx_view_idx1 on as_view (item);
create index as_idx_view_idx1 on as_view (item)
*
ERROR at line 1:
ORA-01702: a view is not appropriate here

Vertica Table Analysis

I would like to analyze table usage on Vertica to check the following:
the tables that are hit most by queries
the tables that are getting more write queries
the tables that are getting more read queries
So I am asking for help with an SQL query, or if anyone has any documents please point me in the right direction. Thank you.
Here, I create a function QTYPE() that assigns a request of type 'QUERY' to either a SELECT, an INSERT, or a MODIFY (meaning DELETE, UPDATE, MERGE). The differentiation comes from the fact that, in Vertica, UPDATE/MERGE are actually DELETEs followed by INSERTs.
I use two regular expressions of a certain complexity: the first finds [schema.]tablename after a JOIN or FROM keyword, the second finds [schema.]tablename after the UPDATE, INSERT INTO, MERGE INTO or DELETE FROM keywords. Then I join back to the tables system table to a) keep only the tables that really exist and b) add the schema name if it is missing.
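For example, with occurrence i = 2 the look-behind pattern picks out the second [schema.]tablename of a statement (the statement text here is just an illustration):
SELECT REGEXP_SUBSTR(
         'SELECT * FROM dbadmin.child c JOIN dbadmin.tbid t ON c.pk = t.pk'
       , '(?<=(from|join))\s+(\w+\.)?\w+\b'
       , 1, 2, 'i'
       ) AS second_table;
-- should return ' dbadmin.tbid' (the leading blank is stripped later with LTRIM)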
The final report would be:
qtype | tbname | tx_count
--------+------------------------------------------------------+----------
INSERT | dbadmin.nrm_cpustats_rate | 74
INSERT | dbadmin.v_poll_item | 39
INSERT | dbadmin.child | 32
INSERT | dbadmin.tbid | 32
INSERT | dbadmin.etl_group_membership | 12
INSERT | dbadmin.sensor_oco | 11
INSERT | webanalytics.webtraffic_part | 10
INSERT | webanalytics.webtraffic_new_design_platform_datadate | 9
MODIFY | cp.foo | 2
MODIFY | public.foo | 2
MODIFY | taboola_tests.foo | 2
SELECT | dbadmin.flext | 112
SELECT | dbadmin.children | 112
SELECT | dbadmin.ffoo | 112
SELECT | dbadmin.demovals | 112
SELECT | dbadmin.allbut4 | 112
SELECT | dbadmin.allcols | 112
SELECT | dbadmin.allbut1 | 112
SELECT | dbadmin.flx | 112
Here's the function definition, and the CREATE TABLE statement to collect the statistics of what you're looking for, and finally the query getting the 'hit parade' of the most touched tables ...
Mind you, it might become a long runner with a lot of history in your query_requests table ...
CREATE OR REPLACE FUNCTION qtype(sql VARCHAR(64000))
RETURN VARCHAR(8) AS BEGIN
RETURN
CASE UPPER(REGEXP_SUBSTR(sql,'\w+')::VARCHAR(16))
WHEN 'SELECT' THEN 'SELECT'
WHEN 'WITH' THEN 'SELECT'
WHEN 'AT' THEN 'SELECT'
WHEN 'INSERT' THEN 'INSERT'
WHEN 'DELETE' THEN 'MODIFY'
WHEN 'UPDATE' THEN 'MODIFY'
WHEN 'MERGE' THEN 'MODIFY'
ELSE UPPER(REGEXP_SUBSTR(sql,'\w+')::VARCHAR(16))
END
;
END;
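A quick sanity check of what QTYPE() returns (the statement texts are just examples):
SELECT QTYPE('SELECT 1 FROM dual')        AS q1  -- 'SELECT'
     , QTYPE('INSERT INTO t VALUES (1)')  AS q2  -- 'INSERT'
     , QTYPE('UPDATE t SET c = 1')        AS q3  -- 'MODIFY'
     , QTYPE('DELETE FROM t WHERE c = 1') AS q4; -- 'MODIFY'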
DROP TABLE IF EXISTS table_op_stats;
CREATE TABLE table_op_stats AS
WITH
-- need 1000 integers - up to ~400 source tables found in 1 select
i(i) AS (
SELECT MICROSECOND(tm)
FROM (
SELECT TIMESTAMPADD(MICROSECOND, 1,'2000-01-01'::TIMESTAMP)
UNION ALL SELECT TIMESTAMPADD(MICROSECOND,1000,'2000-01-01'::TIMESTAMP)
) l(ts)
TIMESERIES tm AS '1 MICROSECOND' OVER(ORDER BY ts)
)
,
tblist AS (
-- selects can affect several types, found by JOIN or FROM keyword before
-- hence look_behind regular expression
SELECT
QTYPE(request) AS qtype
, transaction_id
, statement_id
, i
, LTRIM(REGEXP_SUBSTR(request,'(?<=(from|join))\s+(\w+\.)?\w+\b',1,i,'i')) as tbname
FROM query_requests CROSS JOIN i
WHERE request_type='QUERY'
AND success
AND LTRIM(REGEXP_SUBSTR(request,'(?<=(from|join))\s+(\w+\.)?\w+\b',1,i,'i')) <> ''
UNION ALL
-- insert/delete/update/merge queries only affect one table each
SELECT
QTYPE(request) AS qtype
, transaction_id
, statement_id
, 1 AS i
, LTRIM(REGEXP_SUBSTR(request,'(insert\s+.*into\s+|update\s+.*|merge\s+.*into|delete\s+.*from)\s*((\w+\.)?\w+)\b',1,1,'i',2)) as tbname
FROM query_requests
WHERE request_type='QUERY'
AND success
AND QTYPE(request) <> 'SELECT'
)
,
-- join back to the "tables" system table - removes queries from correlation names, and adds schema name if needed
real_tables AS (
SELECT
qtype
, transaction_id
, statement_id
, i
, CASE WHEN SPLIT_PART(tbname,'.',2)=''
THEN table_schema||'.'||tbname
ELSE tbname
END AS tbname
FROM tblist
JOIN tables ON CASE WHEN SPLIT_PART(tbname,'.',2)=''
THEN tbname=table_name
ELSE SPLIT_PART(tbname,'.',1)=table_schema AND SPLIT_PART(tbname,'.',2)=table_name
END
)
SELECT
qtype
, transaction_id
, statement_id
, i
, tbname
FROM real_tables;
-- Time: First fetch (0 rows): 42483.769 ms. All rows formatted: 42484.324 ms
-- the query at the end:
WITH grp AS (
SELECT
qtype
, tbname
, COUNT(*) AS tx_count
FROM table_op_stats
GROUP BY 1,2
)
SELECT
*
FROM grp
LIMIT 8 OVER(
PARTITION BY qtype
ORDER BY tx_count DESC
);

Materialised View: Aggregate query involving partition operator

Ran into a wall trying to insert into Tableb rows that meet a certain condition after insertion into Tablea.
drop materialized view mv ;
drop materialized view log on tablea ;
create materialized view log on tablea
with rowid, sequence ( tino, price )
including new values;
create materialized view mv
refresh fast on commit
enable query rewrite
as
SELECT tino,sum(price)
FROM tablea PARTITION(PART_201609)
group by tino;
The above returns ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view.
Is this a limitation? Can an aggregate query not include a partition operator?
The table is too large; I want my view to hold only the data for a specific period/month.
When I remove the PARTITION(PART_201609) and run it as below, I am able to create the view successfully:
create materialized view mv
refresh fast on commit
enable query rewrite
as
SELECT tino,sum(price)
FROM tablea
group by tino;
-- EDIT -- Include tablea's DDL
-- Create table
create table TABLEA
(
tino NUMBER not null,
price VARCHAR2(200),
dated DATE
)
partition by range (DATED)
(
partition PART_201608 values less than (TO_DATE(' 2016-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
partition PART_201609 values less than (TO_DATE(' 2016-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
partition PART_201610 values less than (TO_DATE(' 2016-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')));
You can use the dbms_mview.explain_mview procedure to see why your proposed query can't be used for a fast-refresh-on-commit MV:
begin
dbms_mview.explain_mview(q'[SELECT tino,sum(price)
FROM tablea PARTITION(PART_201609)
group by tino]');
end;
/
select capability_name, possible, msgno, msgtxt from mv_capabilities_table;
CAPABILITY_NAME P MSGNO MSGTXT
------------------------------ - ---------- ------------------------------------------------------------------------------------------
PCT N
REFRESH_COMPLETE Y
REFRESH_FAST N
REWRITE Y
PCT_TABLE N 2067 no partition key or PMARKER or join dependent expression in select list
REFRESH_FAST_AFTER_INSERT N 2169 the materialized view contains partition extended table name
REFRESH_FAST_AFTER_ONETAB_DML N 2143 SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ONETAB_DML N 2146 see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ANY_DML N 2161 see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT N 2157 PCT is not possible on any of the detail tables in the materialized view
REWRITE_FULL_TEXT_MATCH Y
REWRITE_PARTIAL_TEXT_MATCH Y
REWRITE_GENERAL N 2169 the materialized view contains partition extended table name
REWRITE_PCT N 2158 general rewrite is not possible or PCT is not possible on any of the detail tables
PCT_TABLE_REWRITE N 2185 no partition key or PMARKER in select list
As far as I'm aware there isn't any way around the 2169 error:
02169, 00000, "the materialized view contains partition extended table name"
// *Cause: Fast refresh of materialized aggregate views and/or materialized
// join views are not supported if they were defined using partition
// extended table names.
// *Action: Create the fast refreshable materialized view without using
// partition extended table names or create the materialized view as
// a complete refresh materialized view.
Specifying the partition by name is somewhat unusual anyway; you can achieve the same thing by specifying the date range, and Oracle will limit the query to the relevant partition. You get the same execution plan from:
explain plan for
select tino, sum(price)
from tablea partition(part_201609)
group by tino;
and
explain plan for
select tino, sum(price)
from tablea
where dated >= date '2016-09-01'
and dated < date '2016-10-01'
group by tino;
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 115 | 15 (7)| 00:00:01 | | |
| 1 | HASH GROUP BY | | 1 | 115 | 15 (7)| 00:00:01 | | |
| 2 | PARTITION RANGE SINGLE| | 1 | 115 | 14 (0)| 00:00:01 | 2 | 2 |
| 3 | TABLE ACCESS FULL | TABLEA | 1 | 115 | 14 (0)| 00:00:01 | 2 | 2 |
--------------------------------------------------------------------------------------------------
You'll see a higher row count than I get from my dummy table, but note the PSTART and PSTOP columns.
Using that for your MV still isn't quite enough though:
begin
dbms_mview.explain_mview(q'[select tino, sum(price)
from tablea
where dated >= date '2016-09-01'
and dated < date '2016-10-01'
group by tino]');
end;
/
select capability_name, possible, msgno, msgtxt from mv_capabilities_table;
CAPABILITY_NAME P MSGNO MSGTXT
------------------------------ - ---------- ------------------------------------------------------------------------------------------
PCT N
REFRESH_COMPLETE Y
REFRESH_FAST N
REWRITE Y
PCT_TABLE N 2067 no partition key or PMARKER or join dependent expression in select list
REFRESH_FAST_AFTER_INSERT N 2081 mv log does not have all necessary columns
REFRESH_FAST_AFTER_ONETAB_DML N 2143 SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ONETAB_DML N 2146 see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ONETAB_DML N 2142 COUNT(*) is not present in the select list
REFRESH_FAST_AFTER_ONETAB_DML N 2143 SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ANY_DML N 2161 see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT N 2157 PCT is not possible on any of the detail tables in the materialized view
REWRITE_FULL_TEXT_MATCH Y
REWRITE_PARTIAL_TEXT_MATCH Y
REWRITE_GENERAL Y
REWRITE_PCT N 2158 general rewrite is not possible or PCT is not possible on any of the detail tables
PCT_TABLE_REWRITE N 2185 no partition key or PMARKER in select list
You need to resolve the 2067 error:
02067, 00000, "no partition key or PMARKER or join dependent expression in select list"
// *Cause: The capability in question is not supported when the materialized
// view unless the select list (and group by list if a GROUP BY
// clause is present) includes the partition key or
// PMARKER function reference to the table in question or an expression
// join dependent on the partitioning column of the table in question.
// *Action: Add the partition key or a PMARKER function reference or a join dependent
// expression to the select list (and the GROUP BY clause, if present).
... which is related to partition change tracking. You can add a partition marker to the select list and group-by, which again gets the same execution plan (PSTART/PSTOP), but now allows fast-refresh:
explain plan for
select dbms_mview.pmarker(rowid), tino, sum(price)
from tablea
where dated >= date '2016-09-01'
and dated < date '2016-10-01'
group by dbms_mview.pmarker(rowid), tino;
select * from table(dbms_xplan.display);
begin
dbms_mview.explain_mview(q'[select dbms_mview.pmarker(rowid), tino, sum(price)
from tablea
where dated >= date '2016-09-01'
and dated < date '2016-10-01'
group by dbms_mview.pmarker(rowid), tino]');
end;
/
CAPABILITY_NAME P MSGNO MSGTXT
------------------------------ - ---------- ------------------------------------------------------------------------------------------
PCT Y
REFRESH_COMPLETE Y
REFRESH_FAST Y
REWRITE Y
PCT_TABLE Y
REFRESH_FAST_AFTER_INSERT N 2081 mv log does not have all necessary columns
REFRESH_FAST_AFTER_ONETAB_DML N 2143 SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ONETAB_DML N 2146 see the reason why REFRESH_FAST_AFTER_INSERT is disabled
REFRESH_FAST_AFTER_ONETAB_DML N 2142 COUNT(*) is not present in the select list
REFRESH_FAST_AFTER_ONETAB_DML N 2143 SUM(expr) without COUNT(expr)
REFRESH_FAST_AFTER_ANY_DML N 2161 see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
REFRESH_FAST_PCT Y
REWRITE_FULL_TEXT_MATCH Y
REWRITE_PARTIAL_TEXT_MATCH Y
REWRITE_GENERAL Y
REWRITE_PCT Y
PCT_TABLE_REWRITE Y
And you can indeed use that query to create your MV:
create materialized view mv
refresh fast on commit
enable query rewrite
as
select dbms_mview.pmarker(rowid), tino, sum(price)
from tablea
where dated >= date '2016-09-01'
and dated < date '2016-10-01'
group by dbms_mview.pmarker(rowid), tino;
Materialized view MV created.
If you want to enable all capabilities in the MV you can add the dated column to your MV log:
create materialized view log on tablea
with rowid, sequence ( dated, tino, price )
including new values;
and include the missing aggregates in your MV query:
select dbms_mview.pmarker(rowid), tino, sum(price), count(price), count(*)
from tablea a
where dated >= date '2016-09-01'
and dated < date '2016-10-01'
group by dbms_mview.pmarker(rowid), tino
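Putting the extended MV log and the extra aggregates together, the full definition would look something like this (an untested sketch; the column aliases are mine):
create materialized view mv
refresh fast on commit
enable query rewrite
as
select dbms_mview.pmarker(rowid) pmark
     , tino
     , sum(price)   sum_price
     , count(price) cnt_price
     , count(*)     cnt_all
  from tablea
 where dated >= date '2016-09-01'
   and dated <  date '2016-10-01'
 group by dbms_mview.pmarker(rowid), tino;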
Not directly relevant, but also note that you can, if it's beneficial, partition the MV log too:
create materialized view log on tablea
partition by range (dated)
(
partition PART_201608 values less than (TO_DATE(' 2016-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
partition PART_201609 values less than (TO_DATE(' 2016-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')),
partition PART_201610 values less than (TO_DATE(' 2016-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
)
with rowid, sequence ( dated, tino, price )
including new values;

Delete the same set of values from multiple tables

I would like to not have to repeat the same subquery over and over for all tables.
Example:
begin
-- where subquery is quite complex and returns thousands of records of a single ID column
delete from t1 where exists ( select 1 from subquery where t1.ID = subquery.ID );
delete from t2 where exists ( select 1 from subquery where t2.ID = subquery.ID );
delete from t3 where exists ( select 1 from subquery where t3.ID = subquery.ID );
end;
/
An alternative I've found is:
declare
type id_table_type is table of table.column%type index by PLS_INTEGER;
ids id_table_type;
begin
select ID
bulk collect into ids
from subquery;
forall indx in 1 .. ids.COUNT
delete from t1 where ID = ids(indx);
forall indx in 1 .. ids.COUNT
delete from t2 where ID = ids(indx);
forall indx in 1 .. ids.COUNT
delete from t3 where ID = ids(indx);
end;
/
What are your thoughts about this alternative? is there a more efficient way of doing this?
Create a temporary table, once, to hold the results of the subquery.
For each run, insert the results of the subquery into the temporary table. The subquery only runs once and each delete is simple: delete from mytable t where t.id in (select id from tmptable);.
Truncate the table when finished.
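A minimal sketch of that approach with a global temporary table (tmp_ids is an illustrative name, and subquery stands for your complex subquery):
-- one-time setup
create global temporary table tmp_ids (
  id number primary key
) on commit preserve rows;

-- each run: the expensive subquery executes only once
insert into tmp_ids (id)
select id from subquery;

delete from t1 where t1.id in (select id from tmp_ids);
delete from t2 where t2.id in (select id from tmp_ids);
delete from t3 where t3.id in (select id from tmp_ids);

commit;

-- clean up for the next run
truncate table tmp_ids;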
If you can do it in pure SQL, then do it in SQL; there is no need for PL/SQL. Every SQL call from PL/SQL (or vice versa, though less so in this case) carries the overhead of a context switch between the two engines.
Having said that, if you must do it in PL/SQL, it is possible to reduce context switches by bulk binding the whole collection to the DML statement in one operation.
Usually a cursor FOR loop does an implicit BULK COLLECT with a LIMIT of 100, which is much better than an explicit cursor.
But it's not just about bulk collecting; we also have to deal with the operations we subsequently perform on the array we fetched incrementally. We can further improve performance by using a FORALL statement along with BULK COLLECT, as sketched below.
IMO, the best would be to do it in pure SQL. If you really must do it in PL/SQL, then do it as described above.
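A sketch of that chunked BULK COLLECT ... LIMIT plus FORALL pattern (the limit of 100 is only an example):
declare
  cursor c is
    select id from subquery;
  type id_tab_t is table of t1.id%type index by pls_integer;
  ids id_tab_t;
begin
  open c;
  loop
    fetch c bulk collect into ids limit 100;
    exit when ids.count = 0;
    forall i in 1 .. ids.count
      delete from t1 where id = ids(i);
    forall i in 1 .. ids.count
      delete from t2 where id = ids(i);
    forall i in 1 .. ids.count
      delete from t3 where id = ids(i);
  end loop;
  close c;
end;
/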
I would go with the SQL approach, and since you have the same subquery repeated, I would use the query result cache, introduced in Oracle 11g.
In your subquery:
SELECT /*+ RESULT_CACHE */ <column_list> .. <your subquery>...
For example,
SQL> EXPLAIN PLAN FOR
2 SELECT /*+ RESULT_CACHE */
3 deptno,
4 AVG(sal)
5 FROM emp
6 GROUP BY deptno;
Explained.
Let's look at the plan table output:
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------
Plan hash value: 4067220884
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 21 | 3 (0)| 00:00:01 |
| 1 | RESULT CACHE | b9aa181887ufz5341w1zqpf1d1 | | | | |
| 2 | HASH GROUP BY | | 3 | 21 | 3 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL| EMP | 14 | 98 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------
Result Cache Information (identified by operation id):
------------------------------------------------------
1 - column-count=2; dependencies=(SCOTT.EMP); name="SELECT /*+ RESULT_CACHE */
deptno,
AVG(sal)
FROM emp
GROUP BY deptno"
15 rows selected.
An alternative would be:
begin
for s in (select distinct id from subquery) loop
delete from t1 where t1.id = s.id;
delete from t2 where t2.id = s.id;
delete from t3 where t3.id = s.id;
end loop;
end;
/
In general this is a questionable technique, as processing "row-by-row" is slower than doing multi-row operations. But if the point is that subquery is very slow, this may be an improvement.
There seems to be a dependency between tables t1, t2 and t3 on their id field.
You can probably solve this with a delete trigger on t1 that removes the entries in t2 and t3 as well (see the sketch below).
I don't know exactly which tables your subquery touches; maybe you can set an update flag (e.g. a date field) on a further table t0 and delete the entries in t1, t2 and t3 with an update trigger on t0 whenever that field changes.
The advantage of this solution is database consistency and pure SQL.
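A sketch of the delete-trigger idea (it assumes t2 and t3 carry the same id values as t1):
create or replace trigger t1_cascade_delete
after delete on t1
for each row
begin
  delete from t2 where t2.id = :old.id;
  delete from t3 where t3.id = :old.id;
end;
/

-- a single delete against t1 is then enough
delete from t1
 where exists (select 1 from subquery where subquery.id = t1.id);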
I think it is not possible to delete from multiple tables in a single statement.
One way could be to use subquery factoring (the WITH clause). I don't have an exact query right now, but I think the WITH clause might help; I will get back with an exact query once I am able to devote some time.
Also, if you need to delete the subquery's IDs from the other tables on a daily basis as well, it becomes easier if you create foreign keys with ON DELETE CASCADE (see the sketch below).
Hope it helps.
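For completeness, the ON DELETE CASCADE idea would look roughly like this (it assumes t1.id is the primary key and that t2/t3 reference it):
alter table t2 add constraint t2_t1_fk
  foreign key (id) references t1 (id) on delete cascade;
alter table t3 add constraint t3_t1_fk
  foreign key (id) references t1 (id) on delete cascade;
-- after this, deleting the parent rows from t1 also removes the matching rows from t2 and t3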

Oracle Index with multiple Columns querying on single column

In a table in our Oracle installation we have a table with an index on two of the columns (X and Y). If I do a query on the table with a where clause only touching column X, will Oracle be able to use the index?
For example:
Table Y:
Col_A,
Col_B,
Col_C,
Index exists on (Col_A, Col_B)
SELECT * FROM Table_Y WHERE Col_A = 'STACKOVERFLOW';
Will the index be used, or will a table scan be done?
It depends.
You could check it by letting Oracle explain the execution plan:
EXPLAIN PLAN FOR
SELECT * FROM Table_Y WHERE Col_A = 'STACKOVERFLOW';
and then
select * from table(dbms_xplan.display);
So, for example with
create table table_y (
col_a varchar2(30),
col_b varchar2(30),
col_c varchar2(30)
);
create unique index table_y_ix on table_y (col_a, col_b);
and then a
explain plan for
select * from table_y
where col_a = 'STACKOVERFLOW';
select * from table(dbms_xplan.display);
The plan (on my installation) looks like:
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 51 | 1 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| TABLE_Y | 1 | 51 | 1 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | TABLE_Y_IX | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("COL_A"='STACKOVERFLOW')
ID 2 shows you that the index TABLE_Y_IX is indeed used for an index range scan.
Whether Oracle chooses to use the index on another installation depends on many things; it's Oracle's query optimizer that makes this decision.
Update: If you feel you'd be better off (performance-wise, that is) if Oracle used the index, you might want to try the /*+ index_asc(...) */ hint (see index hint).
So in your case that would be something like
SELECT /*+ index_asc(TABLE_Y TABLE_Y_IX) */ *
FROM Table_Y
WHERE Col_A = 'STACKOVERFLOW';
Additionally, I would ensure that you have gathered statistics on the table and its columns. You can check the date of the last gathering of statistics with a
select last_analyzed from dba_tables where table_name = 'TABLE_Y';
and
select column_name, last_analyzed from dba_tab_columns where table_name = 'TABLE_Y';
If there are no statistics or if they're stale, make yourself familiar with the dbms_stats package to gather such statistics; a typical call is sketched below.
These statistics are the data that the query optimizer relies on heavily to make its decisions.
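A typical call looks something like this (adjust the owner and options to your environment):
begin
  dbms_stats.gather_table_stats(
    ownname => user,
    tabname => 'TABLE_Y',
    cascade => true  -- also gather statistics on the table's indexes
  );
end;
/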
