How to set ALL_ROWS for a materialized view in Oracle?

How can I make sure that a materialized view is refreshed using the optimizer_mode = ALL_ROWS?
Background: I'm migrating an mview from an ALL_ROWS database to a FIRST_ROWS database and don't want to lose the setting, as the refresh takes orders of magnitude longer with FIRST_ROWS / nested loops compared to ALL_ROWS / hash joins.
The mview is built on top of a view, and a couple of similar mviews are refreshed in a PL/SQL procedure.
I've tried out some minimal examples, it looks like the precedence is
Hint /*+ ALL_ROWS */ in materialized view
If that is not set, a hint in the view is observed
If neither mview nor view have such a hint, the session setting is observed
Is this correct?
I've tried out three views, without and with hints:
CREATE OR REPLACE VIEW v_def AS SELECT /*+ */ * FROM all_objects;
CREATE OR REPLACE VIEW v_all AS SELECT /*+ ALL_ROWS */ * FROM all_objects;
CREATE OR REPLACE VIEW v_first AS SELECT /*+ FIRST_ROWS */ * FROM all_objects;
And five mviews without and with hints:
CREATE MATERIALIZED VIEW mv_def BUILD DEFERRED AS SELECT /*+ */ * FROM v_def;
CREATE MATERIALIZED VIEW mv_all BUILD DEFERRED AS SELECT /*+ */ * FROM v_all;
CREATE MATERIALIZED VIEW mv_first BUILD DEFERRED AS SELECT /*+ */ * FROM v_first;
CREATE MATERIALIZED VIEW mv_all_first BUILD DEFERRED AS SELECT /*+ ALL_ROWS */ * FROM v_first;
CREATE MATERIALIZED VIEW mv_first_all BUILD DEFERRED AS SELECT /*+ FIRST_ROWS */ * FROM v_all;
When I refresh the mviews with a procedure ...
CREATE OR REPLACE PROCEDURE p_def IS
BEGIN
dbms_mview.refresh('mv_def', atomic_refresh=>FALSE);
dbms_mview.refresh('mv_all', atomic_refresh=>FALSE);
dbms_mview.refresh('mv_first', atomic_refresh=>FALSE);
dbms_mview.refresh('mv_all_first', atomic_refresh=>FALSE);
dbms_mview.refresh('mv_first_all', atomic_refresh=>FALSE);
END;
/
CREATE OR REPLACE PROCEDURE p_all IS
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET optimizer_mode = ALL_ROWS';
p_def;
END;
/
CREATE OR REPLACE PROCEDURE p_first IS
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET optimizer_mode = FIRST_ROWS';
p_def;
END;
/
... I get the following results:
mview         p_def      p_all      p_first
------------- ---------- ---------- ----------
MV_DEF        first_rows all_rows   first_rows
MV_ALL        all_rows   all_rows   all_rows
MV_FIRST      first_rows first_rows first_rows
MV_ALL_FIRST  all_rows   all_rows   all_rows
MV_FIRST_ALL  first_rows first_rows first_rows
The setting of optimizer_mode came from the query:
SELECT e.value as optimizer_mode, c.sql_id, substr(c.sql_text,1,100) as sql
FROM v$sql c
LEFT JOIN v$sql_optimizer_env e
ON e.sql_id = c.sql_id AND e.name = 'optimizer_mode'
WHERE regexp_like(c.sql_text, 'BYPASS.*(v_def|v_all|v_first)');
So, I need to protect the mviews from the database setting FIRST_ROWS, right?
I can do this either in the PL/SQL procedure that refreshes the mviews, with an ALTER SESSION statement (hoping that nobody else will ever refresh the mviews directly), or by changing the queries of the mviews to add an /*+ ALL_ROWS */ hint, right?

You are correct, the precedence for optimizer settings, in order of increasing priority, is:
System parameter
Session setting
Query block level hint
Statement level or parent query block level hint
If the /*+ ALL_ROWS */ hint is set in the outermost query, that hint will override other hints and settings.
How do we prove the precedence rules are true?
Although I can't find a clear reference to the above rules in the official documentation, most of the rules are fairly obvious. The first three make sense and we've probably all seen them in action before. It makes sense that configuration is set at a high level and then optionally overridden at lower levels: first for the entire system, then for a specific session, and finally for a single query.
The unusual precedence is the last one, where a high-level statement hint overrides a low-level query block hint. Luckily, we can use the 19c hint report to demonstrate that this rule is true.
Simple hint test
The following test case shows the FIRST_ROWS hint being used and showing up in the "Hint Report" section of the execution plan.
--drop table test_table;
create table test_table(a number);
explain plan for select /*+ first_rows */ * from test_table;
select * from table(dbms_xplan.display(format => 'basic +hint_report'));
Plan hash value: 3979868219
----------------------------------------
| Id | Operation | Name |
----------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| TEST_TABLE |
----------------------------------------
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------
0 - STATEMENT
- first_rows
Parent hint overrides child hint
Although you've already created a test case where an ALL_ROWS hint in the parent query block overrides the FIRST_ROWS hint in a child block, the following test case makes it clear that the behavior is not just an accident. The "Hint Report" explains very clearly that "first_rows / hint overridden by another in parent query block".
explain plan for select /*+ all_rows */ * from (select /*+ first_rows */ * from test_table);
select * from table(dbms_xplan.display(format => 'basic +hint_report'));
Plan hash value: 3979868219
----------------------------------------
| Id | Operation | Name |
----------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | TABLE ACCESS FULL| TEST_TABLE |
----------------------------------------
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 2 (U - Unused (1))
---------------------------------------------------------------------------
0 - STATEMENT
U - first_rows / hint overridden by another in parent query block
- all_rows
While this answer isn't a definitive proof of the behavior, I believe it is sufficient evidence for us to feel confident that using the ALL_ROWS hint in the outermost query will always work.
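Applied to the original problem, one hedged sketch (reusing the object names from the question's test case) is to redefine the materialized view with the hint in the outermost query block, so the refresh plan no longer depends on the session's optimizer_mode:

```sql
-- Sketch: redefine the mview with a statement-level ALL_ROWS hint.
-- The hint in the outer-most query block overrides the FIRST_ROWS hint
-- inside v_first as well as any session-level optimizer_mode setting.
DROP MATERIALIZED VIEW mv_first;

CREATE MATERIALIZED VIEW mv_first BUILD DEFERRED AS
SELECT /*+ ALL_ROWS */ * FROM v_first;
```

The alternative, an ALTER SESSION in the refresh procedure, works too, but it only protects refreshes that go through that procedure.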

Related

View Performance

I have a requirement to perform some calculation on a column of a table with a large data set (300 GB) and return that value.
Basically, I need to create a view on that table. The table holds 21 years of data and is partitioned on a date column (daily). We cannot put a date condition in the view's query; the user will apply the filter at runtime when executing the view.
For example:
Create view v_view as
select * from table;
Now I want to query the view like
Select * from v_view where ts_date between '1-Jan-19' and '1-Jan-20'
How does Oracle execute the above statement internally? Will it execute the view query first and then apply the date filter to the result?
If so, won't there be a performance issue? And how can that be resolved?
Oracle first generates the view and then applies the filter. You can create a function whose input is supplied by the user. The function returns a CREATE VIEW statement, and if you run that statement the view will be created. Just run:
create or replace function fnc_x(where_condition in varchar2)
return varchar2
as
begin
return ' CREATE OR REPLACE VIEW sup_orders AS
SELECT suppliers.supplier_id, orders.quantity, orders.price
FROM suppliers
INNER JOIN orders
ON suppliers.supplier_id = orders.supplier_id
'||where_condition||' ';
end fnc_x;
This function should be run with an input string like this (note the doubled quotes around the literal):
'WHERE suppliers.supplier_name = ''Microsoft'''
then you should run a block like this to run the function's result:
cl scr
set SERVEROUTPUT ON
declare
szSql varchar2(3000);
crte_vw varchar2(3000);
begin
szSql := q'[select fnc_x('WHERE suppliers.supplier_name = ''Microsoft''') from dual]';
dbms_output.put_line(szSql);
execute immediate szSql into crte_vw; -- generate the 'create view' command that depends on the user's where_condition
dbms_output.put_line(crte_vw);
execute immediate crte_vw ; -- create the view
end;
In this manner, you just need to receive where_condition from the user.
Oracle can "push" the predicates inside simple views and can then use those predicates to enable partition pruning for optimal performance. You almost never need to worry about what Oracle will run first - it will figure out the optimal order for you. Oracle does not need to mindlessly build the first step of a query, and then send all of the results to the second step. The below sample schema and queries demonstrate how only the minimal amount of partitions are used when a view on a partitioned table is queried.
--drop table table1;
--Create a daily-partitioned table.
create table table1(id number, ts_date date)
partition by range(ts_date)
interval (numtodsinterval(1, 'day'))
(
partition p1 values less than (date '2000-01-01')
);
--Insert 1000 values, each in a separate day and partition.
insert into table1
select level, date '2000-01-01' + level
from dual
connect by level <= 1000;
--Create a simple view on the partitioned table.
create or replace view v_view as select * from table1;
The following explain plan shows "Pstart" and "Pstop" set to 3 and 4, which means that only 2 of the many partitions are used for this query.
--Generate an explain plan for a simple query on the view.
explain plan for
select * from v_view where ts_date between date '2000-01-02' and date '2000-01-03';
--Show the explain plan.
select * from table(dbms_xplan.display(format => 'basic +partition'));
Plan hash value: 434062308
-----------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | PARTITION RANGE ITERATOR| | 3 | 4 |
| 2 | TABLE ACCESS FULL | TABLE1 | 3 | 4 |
-----------------------------------------------------------
However, partition pruning and predicate pushing do not always work when we think they should. One thing we can do to help the optimizer is to use date literals instead of strings that look like dates. For example, replace '1-Jan-19' with date '2019-01-01'. When we use ANSI date literals there is no ambiguity, and Oracle is more likely to use partition pruning.
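For example, against the view from the question (assuming ts_date is a DATE column), the two styles look like this:

```sql
-- Relies on implicit string-to-date conversion and the session's NLS settings;
-- it may prevent partition pruning or fail outright under a different NLS_DATE_FORMAT:
select * from v_view where ts_date between '1-Jan-19' and '1-Jan-20';

-- Unambiguous ANSI date literals; the optimizer can prune partitions reliably:
select * from v_view where ts_date between date '2019-01-01' and date '2020-01-01';
```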

Oracle WITH and MATERIALIZE hint acts as autonomous transaction for functions

In Oracle 12c if I call a function in a query that uses MATERIALIZE hint in a WITH..AS section, the function call acts like an autonomous transaction:
DROP TABLE my_table;
CREATE TABLE my_table (
my_column NUMBER
);
-- Returns number of records in table
CREATE OR REPLACE FUNCTION my_function
RETURN INTEGER
IS
i INTEGER;
BEGIN
SELECT COUNT(1) INTO i FROM my_table;
RETURN i;
END;
/
-- Inserts one record to table
INSERT INTO my_table (my_column) VALUES (9);
-- Returns number of records in table. This works correctly, returns 1
SELECT COUNT(1) AS "use simple select" FROM my_table;
-- Returns number of records in table. This works correctly, returns 1
WITH x AS (
SELECT /*+ MATERIALIZE */ COUNT(1) AS "use WITH, MATERIALIZE" FROM my_table
)
SELECT * FROM x;
-- Returns number of records in table. This works correctly, returns 1
SELECT my_function AS "use FUNCTION" FROM dual;
-- Returns number of records in table. This works INCORRECTLY, returns 0.
-- Function is called in autonomous transaction?
WITH x AS (
SELECT /*+ MATERIALIZE */ my_function "use WITH,MATERIALIZE,FUNCTION" FROM dual
)
SELECT * FROM x;
ROLLBACK;
Does anyone know the reason for this? Is it an Oracle bug, or is it intended to work like this? (And why?)
Why does it only happen when WITH is combined with the MATERIALIZE hint and a function call?
This looks like bug 15889476, "Wrong results with cursor-duration temp table and function running on an active transaction"; and 13253977 "Wrong results or error with cursor-duration temp table and PLSQL function running on an active transaction".
I can reproduce on 11.2.0.3 but not 11.2.0.4; and from Husqvik's comment it doesn't seem to reproduce on 12.1.0.2. That aligns with the affected version and fix-first-included-in information in the bug documents.
See MOS documents 15889476.8 and 13253977.8 for more information. You may need to contact Oracle Support to confirm this is the issue you are seeing, but it looks pretty similar.

Can't set the ON COMMIT refresh attribute when creating a materialized view containing partial primary key in Oracle

I need to extract the unique values of a column that is part of the primary key of a table into a materialized view. I can create the materialized view using "refresh complete", but have had no luck trying to use "refresh fast on commit". Can anyone point out whether I missed anything, or whether Oracle simply does not support this?
The example output is listed below. Thanks.
SQL> create table TEST( col1 number, col2 number, col3 varchar(32), CONSTRAINT test_pk Primary Key (col1, col2));
Table created.
SQL> create materialized view test_mv build immediate refresh fast on commit as select distinct col2 from test;
create materialized view test_mv build immediate refresh fast on commit as select distinct col2 from test
*
ERROR at line 1:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
SQL> create materialized view test_mv build immediate refresh complete as select distinct col2 from test;
Materialized view created.
SQL> drop materialized view test_mv;
Materialized view dropped.
SQL> create materialized view log on test;
Materialized view log created.
SQL> create materialized view test_mv build immediate refresh fast on commit as select distinct col2 from test;
create materialized view test_mv build immediate refresh fast on commit as select distinct col2 from test
*
ERROR at line 1:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
The main issue with your view is the DISTINCT clause. ON COMMIT fast refresh is extremely sensitive to the underlying query. There are many rules that must be satisfied for a materialized view to support fast refresh, and DISTINCT prevents it.
You can check the capabilities of a materialized view using DBMS_MVIEW.EXPLAIN_MVIEW procedure:
DECLARE
  result SYS.EXPLAINMVARRAYTYPE := SYS.EXPLAINMVARRAYTYPE();
BEGIN
  DBMS_MVIEW.EXPLAIN_MVIEW('TEST_MV', result);
  FOR i IN result.FIRST .. result.LAST LOOP
    DBMS_OUTPUT.PUT_LINE(
      result(i).CAPABILITY_NAME || ': ' ||
      CASE
        WHEN result(i).POSSIBLE = 'T' THEN 'Yes'
        ELSE 'No' ||
             CASE
               WHEN result(i).RELATED_TEXT IS NOT NULL
               THEN ' because of ' || result(i).RELATED_TEXT
             END ||
             '; ' || result(i).MSGTXT
      END);
  END LOOP;
END;
/
You can find more information in the documentation: http://docs.oracle.com/cd/B28359_01/server.111/b28313/basicmv.htm#i1007007
Fast refresh views are picky. This solution requires a materialized view log with specific properties, and a materialized view with a few extra features and a different syntax.
DISTINCT alone does not appear to be supported. But there are aggregate materialized views that support GROUP BY. If that materialized view is created with ENABLE QUERY REWRITE, Oracle can use it in a DISTINCT query. There is also an extra COUNT(*) because "COUNT(*) must always be present to guarantee all types of fast refresh."
Create table, materialized view log, and materialized view.
create table test(col1 number, col2 number, col3 varchar(32)
,constraint test_pk primary key (col1, col2));
create materialized view log on test with rowid (col2) including new values;
create materialized view test_mv
build immediate
refresh fast on commit
enable query rewrite as
select col2, count(*) total from test group by col2;
Queries can use the materialized view.
These explain plans show that the materialized view works for both a GROUP BY and a DISTINCT query.
explain plan for select col2 from test group by col2;
select * from table(dbms_xplan.display);
explain plan for select distinct col2 from test;
select * from table(dbms_xplan.display);
Plan hash value: 1627509066
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 2 (0)| 00:00:01 |
| 1 | MAT_VIEW REWRITE ACCESS FULL| TEST_MV | 1 | 13 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Optimizer using an index not present in the current schema

CONNECT alll/all
SELECT /*+ FIRST_ROWS(25) */ employee_id, department_id
FROM hr.employees
WHERE department_id > 50;
Execution Plan
Plan hash value: 2056577954
| Id | Operation | Name | Rows | Bytes |
| 0 | SELECT STATEMENT | | 25 | 200
| 1 | TABLE ACCESS BY INDEX ROWID| EMPLOYEES | 25 | 200
|* 2 | INDEX RANGE SCAN | EMP_DEPARTMENT_IX | |
SQL> select * from user_indexes where index_name = 'EMP_DEPARTMENT_IX';
no rows selected
NOTE: There is an index with the same name on the DEPARTMENT column of the EMPLOYEES table in some other schema. And when that index is dropped a FULL TABLE SCAN on EMPLOYEES is performed.
Can the optimizer use that other index from some other schema over here?
You're connected as user ALLL, but you're querying a table in the HR schema:
SELECT /*+ FIRST_ROWS(25) */ employee_id, department_id
FROM hr.employees
WHERE department_id > 50;
You stressed other schema in the question, but seem to have overlooked that the table you're querying is also in another schema. The employees table won't appear in user_tables either.
The index being used is associated with that table, so it's likely to be in the same HR schema. You can see it in all_indexes or dba_indexes; the optimiser will use it even if you can't see it though. And it doesn't have to be in the same schema as the table, though it usually will be; in those views you might notice separate owner and table owner columns.
The schema model would break down if you could only utilise indexes in your own schema when accessing a table in someone else's. Every user would have to create their own copies of the indexes, which would be untenable.
You don't even necessarily have to be able to see the table - if you query a view that hides the underlying table from you (so you have select privs on the view only) the index will still be used in the background. And you might not always be explicitly using the schema prefix, if there is a synonym for the table, or you change your default schema.
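To see the index regardless of which schema owns it, you can query ALL_INDEXES (or DBA_INDEXES if you have access); note the separate owner and table owner columns mentioned above:

```sql
-- Shows the index even though it belongs to another schema,
-- as long as you can access the underlying table.
select owner, index_name, table_owner, table_name, status
from   all_indexes
where  index_name = 'EMP_DEPARTMENT_IX';
```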
Try looking in SYS.INDEXES:
select * from SYS.INDEXES where IXNAME = 'EMP_DEPARTMENT_IX'
Sounds like you are not the owner of the index, as you have noted. As long as your user can access the table data, then the index should be used by the optimizer.

Identify partitions having stale statistics in a list of schema

I have 5 development schemas, and each of them has partitioned tables. We also have scripts that dynamically create partitioned tables (monthly/yearly). We have to go to the DBA every time to gather details about the partitioned tables. Our real problem is one partitioned table with 9 partitions. Every day it receives a delta load (updates/deletes via PL/SQL) and also some APPEND loads using SQL*Loader. This happens while the database is at peak load, and we have some performance issues with SELECT queries on this table.
When we report this to the DBA, they say the table statistics are stale, and after they gather stats the query magically runs faster. I searched about this and found some information about dynamic performance views.
So now I have the following questions:
1) Can a developer generate a list of all partitioned tables, partition names, and number of records available, without going to the DBA?
2) Can we identify the last-analyzed date of every partition?
3) Can we also check the status of each partition's index, i.e. whether it is usable or unusable?
Normally there is no need to identify objects that need statistics gathered. Oracle automatically gathers statistics for stale objects, unless the task has been manually disabled. This is usually good enough for OLTP systems. Use this query to find the status of the task:
select status
from dba_autotask_client
where client_name = 'auto optimizer stats collection';
STATUS
------
ENABLED
For data warehouse systems there is also not much need to query the data dictionary for stale stats. In a data warehouse, statistics need to be considered after almost every operation. Developers need to get into the habit of always thinking about statistics after a truncate, insert, swap, etc. Eventually they will "just know" when to gather statistics.
But if you still want to see how Oracle determines if statistics are stale, look at DBA_TAB_STATISTICS and DBA_TAB_MODIFICATIONS.
Here is an example of an initial load with statistics gathering. The table and partitions are not stale.
create table test1(a number, b number) partition by list(a)
(
partition p1 values (1),
partition p2 values (2)
);
insert into test1 select 1, level from dual connect by level <= 50000;
begin
dbms_stats.gather_table_stats(user, 'test1');
dbms_stats.flush_database_monitoring_info;
end;
/
select table_name, partition_name, num_rows, last_analyzed, stale_stats
from user_tab_statistics
where table_name = 'TEST1'
order by 1, 2;
TABLE_NAME PARTITION_NAME NUM_ROWS LAST_ANALYZED STALE_STATS
---------- -------------- -------- ------------- -----------
TEST1      P1                50000 2014-01-22    NO
TEST1      P2                    0 2014-01-22    NO
TEST1                        50000 2014-01-22    NO
Now add a large number of rows and the statistics are stale.
begin
insert into test1 select 2, level from dual connect by level <= 25000;
commit;
dbms_stats.flush_database_monitoring_info;
end;
/
select table_name, partition_name, num_rows, last_analyzed, stale_stats
from user_tab_statistics
where table_name = 'TEST1'
order by 1, 2;
TABLE_NAME PARTITION_NAME NUM_ROWS LAST_ANALYZED STALE_STATS
---------- -------------- -------- ------------- -----------
TEST1      P1                50000 2014-01-22    NO
TEST1      P2                    0 2014-01-22    YES
TEST1                        50000 2014-01-22    YES
USER_TAB_MODIFICATIONS gives more specific information on table staleness.
--Stale statistics.
select user_tables.table_name, user_tab_modifications.partition_name
,inserts+updates+deletes modified_rows, num_rows, last_analyzed
,case when num_rows = 0 then null
else (inserts+updates+deletes) / num_rows * 100 end percent_modified
from user_tab_modifications
join user_tables
on user_tab_modifications.table_name = user_tables.table_name
where user_tables.table_name = 'TEST1';
TABLE_NAME PARTITION_NAME MODIFIED_ROWS NUM_ROWS LAST_ANALYZED PERCENT_MODIFIED
---------- -------------- ------------- -------- ------------- ----------------
TEST1      P2                     25000    50000 2014-01-22                  50
TEST1                             25000    50000 2014-01-22                  50
Yes, you can generate a list of partitioned tables, along with much of the related data you would like to see, by using ALL_PART_TABLES or USER_PART_TABLES (provided you have access).
ALL_TAB_PARTITIONS can be used to get the number of rows per partition, along with other details.
Check other views Oracle has for gathering details about partitioned tables.
I would suggest that you analyze the tables, and possibly rebuild the indexes, every day after your data load. If your data load affects a lot of records in the table and will affect the existing indexes, it's a good idea to proactively update the statistics for the table and indexes.
You can use the system views to get this information (see http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin005.htm)
I had a somewhat similar problem and solved it by gathering stats on stale partitions only, using the new 11g INCREMENTAL option.
It's the reverse approach to your problem, but it might be worth investigating (specifically, how Oracle determines what a "stale" partition is).
dbms_stats.set_table_prefs('DWH','FACT_TABLE','INCREMENTAL','TRUE')
I always prefer the proactive approach: gather stats on stale partitions as the last step of my ETL, rather than giving the developer stronger privileges.
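A hedged sketch of that approach (the DWH schema and FACT_TABLE names are this answer's examples, not yours):

```sql
-- Enable incremental statistics for the partitioned table, then gather;
-- with INCREMENTAL = TRUE and GRANULARITY = 'AUTO', Oracle re-gathers only
-- the partitions whose data has changed and derives global stats from synopses.
begin
  dbms_stats.set_table_prefs('DWH', 'FACT_TABLE', 'INCREMENTAL', 'TRUE');
  dbms_stats.gather_table_stats(
    ownname     => 'DWH',
    tabname     => 'FACT_TABLE',
    granularity => 'AUTO');
end;
/
```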
I used to query the ALL_* views mentioned below.
The statistics and histogram details you mention are updated automatically on a schedule by Oracle. But when the database is busy with many loads, I have seen that these operations need to be triggered manually. We faced a similar situation, so we forced an analyze operation after our load for critical tables. The ID you use to load the table needs the privilege to do this.
ANALYZE TABLE table_name PARTITION (partition_name) COMPUTE STATISTICS;
EDIT: ANALYZE no longer gathers CBO statistics, as mentioned here.
So the DBMS_STATS package has to be used.
DBMS_STATS.GATHER_TABLE_STATS (
ownname VARCHAR2,
tabname VARCHAR2,
partname VARCHAR2 DEFAULT NULL,
estimate_percent NUMBER DEFAULT to_estimate_percent_type
(get_param('ESTIMATE_PERCENT')),
block_sample BOOLEAN DEFAULT FALSE,
method_opt VARCHAR2 DEFAULT get_param('METHOD_OPT'),
degree NUMBER DEFAULT to_degree_type(get_param('DEGREE')),
granularity VARCHAR2 DEFAULT GET_PARAM('GRANULARITY'),
cascade BOOLEAN DEFAULT to_cascade_type(get_param('CASCADE')),
stattab VARCHAR2 DEFAULT NULL,
statid VARCHAR2 DEFAULT NULL,
statown VARCHAR2 DEFAULT NULL,
no_invalidate BOOLEAN DEFAULT to_no_invalidate_type (
get_param('NO_INVALIDATE')),
force BOOLEAN DEFAULT FALSE);
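In practice most of the defaults can be left alone; a minimal call for a single partition (the table and partition names below are placeholders) might look like:

```sql
-- Gather statistics for one partition of a table in your own schema,
-- cascading to that partition's indexes.
begin
  dbms_stats.gather_table_stats(
    ownname  => user,
    tabname  => 'TABLE_NAME',
    partname => 'PARTITION_NAME',
    cascade  => true);
end;
/
```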
Until the analyze is complete, the views below may not produce accurate results (especially the LAST_ANALYZED and NUM_ROWS columns).
Note: Try replacing ALL_ with DBA_ in the view names; if you have access to those views, you can use them instead.
You can also try to get the SELECT_CATALOG_ROLE for the development ID you use, so that you can SELECT from the data dictionary views; this reduces the dependency on the DBA for such queries. (The DBAs are still the right people for a few issues!)
Query to identify the partitioned table, partition name, number of rows, and last analyzed date:
select
all_part.owner as schema_name,
all_part.table_name,
NVL(all_tab.partition_name,'N/A'),
all_tab.num_rows,
all_tab.last_analyzed
from
all_part_tables all_part,
all_tab_partitions all_tab
where all_part.table_name = all_tab.table_name and
all_part.owner = all_tab.table_owner and
all_part.owner in ('SCHEMA1','SCHEMA2','SCHEMA3')
order by all_part.table_name, all_tab.partition_name;
The query below returns the index/table names that are UNUSABLE:
SELECT INDEX_NAME,
TABLE_NAME,
STATUS
FROM ALL_INDEXES
WHERE status NOT IN ('VALID','N/A');
The query below returns the index/table (partition) names that are UNUSABLE:
SELECT INDEX_NAME,
PARTITION_NAME,
STATUS ,
GLOBAL_STATS
FROM ALL_IND_PARTITIONS
WHERE status != 'USABLE';
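If one of these queries does report an UNUSABLE index or index partition, a hedged sketch of the fix (the index and partition names below are placeholders) is a rebuild:

```sql
-- Rebuild a whole index:
alter index my_index rebuild;

-- Or rebuild just the unusable partition of a local index:
alter index my_local_index rebuild partition p1;
```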
