View Performance - Oracle

I have a requirement to perform some calculation on a column of a table with a large data set (300 GB) and return that value.
Basically I need to create a view on that table. The table holds 21 years of data and is partitioned on a date column (daily). We cannot put a date condition in the view's query; the user will apply the filter at runtime when querying the view.
For example:
Create view v_view as
select * from table;
Now I want to query the view like:
Select * from v_view where ts_date between '1-Jan-19' and '1-Jan-20';
How does Oracle execute the above statement internally? Will it execute the view's query first and then apply the date filter to the result?
If so, won't there be a performance issue? And how do I resolve it?

Oracle first generates the view and then applies the filter. You can create a function whose input is supplied by the user. The function returns a CREATE VIEW statement; if you run that statement, the view is created. Just run:
create or replace function fnc_x(where_condition in varchar2)
return varchar2
as
begin
  return 'CREATE OR REPLACE VIEW sup_orders AS
          SELECT suppliers.supplier_id, orders.quantity, orders.price
          FROM suppliers
          INNER JOIN orders
          ON suppliers.supplier_id = orders.supplier_id
          ' || where_condition;
end fnc_x;
/
The input to the function is the user's filter string; note the literal must be quoted so that the generated view compiles:
WHERE suppliers.supplier_name = 'Microsoft'
then you should run a block like this to run the function's result:
cl scr
set serveroutput on
declare
  crte_vw varchar2(3000);
begin
  -- generate the CREATE VIEW command that depends on the user's where_condition
  crte_vw := fnc_x(q'[WHERE suppliers.supplier_name = 'Microsoft']');
  dbms_output.put_line(crte_vw);
  execute immediate crte_vw; -- create the view
end;
/
In this manner, you just need to receive the where_condition from the user. (Note that concatenating user input into DDL like this is open to SQL injection, so validate the input first.)

Oracle can "push" the predicates inside simple views and can then use those predicates to enable partition pruning for optimal performance. You almost never need to worry about what Oracle will run first - it will figure out the optimal order for you. Oracle does not need to mindlessly build the first step of a query, and then send all of the results to the second step. The below sample schema and queries demonstrate how only the minimal amount of partitions are used when a view on a partitioned table is queried.
--drop table table1;
--Create a daily-partitioned table.
create table table1(id number, ts_date date)
partition by range(ts_date)
interval (numtodsinterval(1, 'day'))
(
partition p1 values less than (date '2000-01-01')
);
--Insert 1000 values, each in a separate day and partition.
insert into table1
select level, date '2000-01-01' + level
from dual
connect by level <= 1000;
--Create a simple view on the partitioned table.
create or replace view v_view as select * from table1;
The following explain plan shows "Pstart" and "Pstop" set to 3 and 4, which means that only 2 of the many partitions are used for this query.
--Generate an explain plan for a simple query on the view.
explain plan for
select * from v_view where ts_date between date '2000-01-02' and date '2000-01-03';
--Show the explain plan.
select * from table(dbms_xplan.display(format => 'basic +partition'));
Plan hash value: 434062308
-----------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop |
-----------------------------------------------------------
| 0 | SELECT STATEMENT | | | |
| 1 | PARTITION RANGE ITERATOR| | 3 | 4 |
| 2 | TABLE ACCESS FULL | TABLE1 | 3 | 4 |
-----------------------------------------------------------
However, partition pruning and predicate pushing do not always work when we think they should. One thing we can do to help the optimizer is to use date literals instead of strings that merely look like dates. For example, replace '1-Jan-19' with date '2019-01-01'. With ANSI date literals there is no ambiguity, and Oracle is more likely to use partition pruning.
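Applied to the original example, the runtime query would then be written like this (assuming this is the intended range):
select *
from v_view
where ts_date between date '2019-01-01' and date '2020-01-01';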

Related

Inserting Row Number based on existing value in the table

I have a requirement to insert a row number into a table based on the values already present in the table. For example, the max row_nbr record in the current table is something like this:
+----------+----------+------------+---------+
| FST_NAME | LST_NAME | STATE_CODE | ROW_NBR |
+----------+----------+------------+---------+
| John | Doe | 13 | 123 |
+----------+----------+------------+---------+
Now, I need to insert more records with given FST_NAME and LST_NAME values. ROW_NBR needs to be generated while inserting the data into the table, with values auto-incrementing from 123.
I can't use a sequence, as my loading process is not the only process that inserts data into this table. And I can't use a cursor either, because with the high volume of data the TEMP space fills up quickly. I'm inserting data as given below:
insert into final_table
( fst_name,lst_name,state_code)
(select * from staging_table
where state_code=13);
Any ideas how to implement this?
It sounds like other processes are finding the current maximum row_nbr value and incrementing it as they do single-row inserts in a cursor loop.
You could do something functionally similar, either finding the maximum in advance and incrementing it (if you're already running this in a PL/SQL block):
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, variable_holding_maximum + rownum
from staging_table st
where st.state_code=13;
or by querying the table as part of the query, which doesn't need PL/SQL:
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, (select max(row_nbr) from final_table) + rownum
from staging_table st
where st.state_code=13;
db<>fiddle
But this isn't a good solution because it doesn't prevent clashes from different processes and sessions trying to insert at the same time; neither would the cursor loop approach, unless it catches unique constraint errors and re-attempts with a new value.
It would be better to use a sequence - effectively an auto-increment column - but you said you can't change the table structure, and you need to let the other processes continue to work without modification. You can still do that with a sequence-and-trigger approach, having the trigger always set the row_nbr value from the sequence, regardless of whether the insert statement supplied a value.
If you create a sequence that starts from the current maximum, with something like:
create sequence final_seq start with <current max + 1>
or without manually finding it:
declare
start_with pls_integer;
begin
select nvl(max(row_nbr), 0) + 1 into start_with from final_table;
execute immediate 'create sequence final_seq start with ' || start_with;
end;
/
then your trigger could just be:
create trigger final_trig
before insert on final_table
for each row
begin
:new.row_nbr := final_seq.nextval;
end;
/
Then your insert ... select statement doesn't need to supply or even think about the row_nbr value, so you can leave it as you have it now (except I'd avoid select * even in that construct, and list the staging table columns explicitly); and any existing inserts that do supply the row_nbr don't need to be modified and the value they supply will just be overwritten from the sequence.
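For instance, the load could become the following (assuming the staging table has exactly these three columns):
insert into final_table (fst_name, lst_name, state_code)
select fst_name, lst_name, state_code
from staging_table
where state_code = 13;
-- the trigger fills row_nbr from final_seq for every inserted row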
db<>fiddle showing inserts with and without row_nbr specified.

Oracle Select * returns rows but Select count(1) returns 0

So, this is bizarre and it's something I have never seen before. I'm hoping someone has the magic answer that can shed some light on this problem...
SELECT * FROM TABLE -- returns rows... a lot of rows
however,
SELECT COUNT(1) FROM TABLE -- returns zero (0), as in the number zero (0) as the result
Here's the table structure:
CREATE TABLE TRACKING (
A_ID NUMBER,
D_CODE NUMBER,
HOD NUMBER,
ADR_CNT NUMBER,
TTL_CNT NUMBER,
CREATED DATE,
MODIFIED DATE
);
CREATE INDEX HOD_D_CODE_IDX ON TRACKING (HOD, D_CODE);
CREATE UNIQUE INDEX TRACKING_PK ON TRACKING (A_ID, D_CODE, HOD);
CREATE INDEX MOD_DATE_IDX ON TRACKING (MODIFIED);
ALTER TABLE TRACKING ADD CONSTRAINT TRACKING_PK PRIMARY KEY (A_ID, D_CODE, HOD);
How can an Oracle table have rows but count(1) return zero? I've done some searching on the web but found nothing. The only other post I found was in relation to MS SQL Server. This is happening in Oracle.
Any idea? Anyone?
Thank you in advance for any help you can provide.
Another thing I might add in hopes that it will help answer the puzzle: this table was used by an Oracle job to aggregate and populate another table. However, that job has been done for a few days now. The other table is fully populated and shows the expected record counts. I checked the Oracle job log and it shows all successes and not a single error.
Wrong results can be caused by corruption, bugs, and features that silently change SQL statements.
Corrupt index. Very rarely an index gets corrupt and the data in the index does not match the data in the table. This causes unexpected results when the query plan changes and an index is used, but everything looks normal for different queries that use table access. Sometimes simply re-building objects can fix this (a rebuild sketch follows this list). If it doesn't, you'll need to create a fully reproducible test case (including data); either post it here or submit it to Oracle Support. It can take many hours to track this down.
Bug. Very rarely a bug can cause queries to fail when returning or changing data. Again, a fully reproducible test case is required to diagnose this, and it can take a while.
Feature that switches SQL. There are a few ways to transparently alter SQL statements. Look into Virtual Private Database (VPD), DBMS_ADVANCED_REWRITE, and the SQL Translation Framework.
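To attempt #1, rebuilding the indexes from the question's DDL is cheap to try first (a sketch; a rebuild is not guaranteed to fix real corruption):
alter index tracking_pk rebuild;
alter index hod_d_code_idx rebuild;
alter index mod_date_idx rebuild;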
To rule out #3, the code below shows you one of the evil ways to do this, and how to detect it. First, create the schema and some data:
CREATE TABLE TRACKING (
A_ID NUMBER,
D_CODE NUMBER,
HOD NUMBER,
ADR_CNT NUMBER,
TTL_CNT NUMBER,
CREATED DATE,
MODIFIED DATE
);
CREATE INDEX HOD_D_CODE_IDX ON TRACKING (HOD, D_CODE);
CREATE UNIQUE INDEX TRACKING_PK ON TRACKING (A_ID, D_CODE, HOD);
CREATE INDEX MOD_DATE_IDX ON TRACKING (MODIFIED);
ALTER TABLE TRACKING ADD CONSTRAINT TRACKING_PK PRIMARY KEY (A_ID, D_CODE, HOD);
insert into tracking values (1,2,3,4,5,sysdate,sysdate);
commit;
At first, everything works as expected:
SQL> SELECT * FROM TRACKING;
A_ID D_CODE HOD ADR_CNT TTL_CNT CREATED MODIFIED
---------- ---------- ---------- ---------- ---------- --------- ---------
1 2 3 4 5 17-JUN-16 17-JUN-16
SQL> SELECT COUNT(1) FROM TRACKING;
COUNT(1)
----------
1
Then someone does this:
begin
sys.dbms_advanced_rewrite.declare_rewrite_equivalence(
'april_fools',
'SELECT COUNT(1) FROM TRACKING',
'SELECT 0 FROM TRACKING WHERE ROWNUM = 1',
false);
end;
/
Now the results are "wrong":
SQL> ALTER SESSION SET query_rewrite_integrity = trusted;
Session altered.
SQL> SELECT COUNT(1) FROM TRACKING;
COUNT(1)
----------
0
This can probably be detected by looking at the explain plan. In the example below, the predicate 2 - filter(ROWNUM=1) is a clue that something is wrong, since that predicate is not in the original query. Sometimes the "Note" section of the explain plan will tell you exactly why a statement was transformed, but sometimes it only gives clues.
SQL> explain plan for SELECT COUNT(1) FROM TRACKING;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------
Plan hash value: 1761840423
------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2 | 1 (0)| 00:00:01 |
| 1 | VIEW | | 1 | 2 | 1 (0)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | INDEX FULL SCAN| HOD_D_CODE_IDX | 1 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(ROWNUM=1)
15 rows selected.
(On an unrelated note - always use COUNT(*) instead of COUNT(1). COUNT(1) is an old myth that looks like cargo cult programming.)
COUNT(SomeColumn) will only return the count of rows that contain non-null values for SomeColumn. COUNT(*) and COUNT('Foo') will return the total number of rows in the table.
Source: Count(*) vs Count(1)
Is it possible that your column 1 contains NULL in every row? If it has no NULLs at all, it should behave like COUNT(*), which returns all rows in the table; if that's the case, we need more information in order to help you.
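As a quick demonstration of those semantics against the TRACKING table above (with the single row inserted earlier):
select count(*), count(1), count(created) from tracking;
-- count(*) and count(1) both count every row, so each returns 1 here;
-- count(created) would return 0 only if CREATED were NULL in that row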

Can't set the ON COMMIT refresh attribute when creating a materialized view containing partial primary key in Oracle

I need to extract the unique values of a column which is part of the primary key from a table into a materialized view. I can create the materialized view using "refresh complete", but have no luck when trying to use "refresh fast on commit". Can anyone point out whether I missed something, or whether Oracle does not support this?
The example output is listed below. Thanks.
SQL> create table TEST( col1 number, col2 number, col3 varchar(32), CONSTRAINT test_pk Primary Key (col1, col2));
Table created.
SQL> create materialized view test_mv build immediate refresh fast on commit as select distinct col2 from test;
create materialized view test_mv build immediate refresh fast on commit as select distinct col2 from test
*
ERROR at line 1:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
SQL> create materialized view test_mv build immediate refresh complete as select distinct col2 from test;
Materialized view created.
SQL> drop materialized view test_mv;
Materialized view dropped.
SQL> create materialized view log on test;
Materialized view log created.
SQL> create materialized view test_mv build immediate refresh fast on commit as select distinct col2 from test;
create materialized view test_mv build immediate refresh fast on commit as select distinct col2 from test
*
ERROR at line 1:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
The main issue with your view is the DISTINCT clause. ON COMMIT fast refresh is extremely sensitive to the underlying query: many rules must be fulfilled for a materialized view to support fast refresh, and DISTINCT prevents it.
You can check the capabilities of a materialized view using DBMS_MVIEW.EXPLAIN_MVIEW procedure:
DECLARE
  result SYS.EXPLAINMVARRAYTYPE := SYS.EXPLAINMVARRAYTYPE();
BEGIN
  DBMS_MVIEW.EXPLAIN_MVIEW('TEST_MV', result);
  FOR i IN result.FIRST .. result.LAST LOOP
    DBMS_OUTPUT.PUT_LINE(
      result(i).CAPABILITY_NAME || ': ' ||
      CASE
        WHEN result(i).POSSIBLE = 'T' THEN 'Yes'
        ELSE 'No'
          || CASE
               WHEN result(i).RELATED_TEXT IS NOT NULL
               THEN ' because of ' || result(i).RELATED_TEXT
             END
          || '; ' || result(i).MSGTXT
      END);
  END LOOP;
END;
/
You can find more information in the documentation: http://docs.oracle.com/cd/B28359_01/server.111/b28313/basicmv.htm#i1007007
Fast refresh views are picky. This solution requires a materialized view log with specific properties, and a materialized view with a few extra features and a different syntax.
DISTINCT alone does not appear to be supported. But there are aggregate materialized views that support GROUP BY. If that materialized view is created with ENABLE QUERY REWRITE, Oracle can use it in a DISTINCT query. There is also an extra COUNT(*) because "COUNT(*) must always be present to guarantee all types of fast refresh."
Create table, materialized view log, and materialized view.
create table test(col1 number, col2 number, col3 varchar(32)
,constraint test_pk primary key (col1, col2));
create materialized view log on test with rowid (col2) including new values;
create materialized view test_mv
build immediate
refresh fast on commit
enable query rewrite as
select col2, count(*) total from test group by col2;
Queries can use the materialized view.
These explain plans show that the materialized view works for both a GROUP BY and a DISTINCT query.
explain plan for select col2 from test group by col2;
select * from table(dbms_xplan.display);
explain plan for select distinct col2 from test;
select * from table(dbms_xplan.display);
Plan hash value: 1627509066
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 2 (0)| 00:00:01 |
| 1 | MAT_VIEW REWRITE ACCESS FULL| TEST_MV | 1 | 13 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
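As a usage sketch against the objects above, committed changes show up in the materialized view without a manual refresh:
insert into test values (1, 10, 'a');
insert into test values (2, 10, 'b');
commit;
-- the on-commit refresh has already run; expect one row: COL2 = 10, TOTAL = 2
select * from test_mv;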

Optimizer using an index not present in the current schema

CONNECT alll/all
SELECT /*+ FIRST_ROWS(25) */ employee_id, department_id
FROM hr.employees
WHERE department_id > 50;
Execution Plan
Plan hash value: 2056577954
| Id | Operation                   | Name              | Rows | Bytes |
|  0 | SELECT STATEMENT            |                   |   25 |   200 |
|  1 | TABLE ACCESS BY INDEX ROWID | EMPLOYEES         |   25 |   200 |
|* 2 |  INDEX RANGE SCAN           | EMP_DEPARTMENT_IX |      |       |
SQL> select * from user_indexes where index_name = 'EMP_DEPARTMENT_IX';
no rows selected
NOTE: There is an index with the same name on the DEPARTMENT_ID column of the EMPLOYEES table in some other schema. And when that index is dropped, a FULL TABLE SCAN on EMPLOYEES is performed.
Can the optimizer use that other index from some other schema over here?
You're connected as user ALLL, but you're querying a table in the HR schema:
SELECT /*+ FIRST_ROWS(25) */ employee_id, department_id
FROM hr.employees
WHERE department_id > 50;
You stressed other schema in the question, but seem to have overlooked that the table you're querying is also in another schema. The employees table won't appear in user_tables either.
The index being used is associated with that table, so it's likely to be in the same HR schema. You can see it in all_indexes or dba_indexes; the optimiser will use it even if you can't see it though. And it doesn't have to be in the same schema as the table, though it usually will be; in those views you might notice separate owner and table owner columns.
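For example, a quick way to check who owns it (assuming you can query ALL_INDEXES):
select owner, index_name, table_owner, table_name
from all_indexes
where index_name = 'EMP_DEPARTMENT_IX';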
The schema model would break down if you could only utilise indexes in your own schema when accessing a table in someone else's. Every user would have to create their own copies of the indexes, which would be untenable.
You don't even necessarily have to be able to see the table - if you query a view that hides the underlying table from you (so you have select privs on the view only) the index will still be used in the background. And you might not always be explicitly using the schema prefix, if there is a synonym for the table, or you change your default schema.
Try looking in DBA_INDEXES, which covers all schemas:
select * from dba_indexes where index_name = 'EMP_DEPARTMENT_IX';
Sounds like you are not the owner of the index, as you have noted. As long as your user can access the table data, then the index should be used by the optimizer.

Pattern to substitute for MERGE INTO Oracle syntax when not allowed

I have an application that uses the Oracle MERGE INTO... DML statement to update table A to correspond with some of the changes in another table B (table A is a summary of selected parts of table B along with some other info). In a typical merge operation, 5-6 rows (out of 10's of thousands) might be inserted in table B and 2-3 rows updated.
It turns out that the application is to be deployed in an environment that has a security policy on the target tables. The MERGE INTO... statement can't be used with these tables (ORA-28132: Merge into syntax does not support security policies)
So we have to change the MERGE INTO... logic to use regular inserts and updates instead. Is this a problem anyone else has run into? Is there a best-practice pattern for converting the WHEN MATCHED/WHEN NOT MATCHED logic in the merge statement into INSERT and UPDATE statements? The merge is within a stored procedure, so it's fine for the solution to use PL/SQL in addition to the DML if that is required.
Another way to do this (other than MERGE) would be to use two SQL statements: one for the insert and one for the update. The "WHEN MATCHED" and "WHEN NOT MATCHED" logic can be handled using joins or an IN clause.
If you take the approach below, it is better to run the update first (since it only runs for the matching records) and then insert the non-matching records. The resulting data sets are the same either way; this order just updates fewer records.
Also, like the MERGE, this update statement updates the NAME column even if the names in source and target already match. If you don't want that, add that condition to the WHERE clause as well.
create table src_table(
id number primary key,
name varchar2(20) not null
);
create table tgt_table(
id number primary key,
name varchar2(20) not null
);
insert into src_table values (1, 'abc');
insert into src_table values (2, 'def');
insert into src_table values (3, 'ghi');
insert into tgt_table values (1, 'abc');
insert into tgt_table values (2,'xyz');
SQL> select * from Src_Table;
ID NAME
---------- --------------------
1 abc
2 def
3 ghi
SQL> select * from Tgt_Table;
ID NAME
---------- --------------------
2 xyz
1 abc
Update tgt_Table tgt
set Tgt.Name =
(select Src.Name
from Src_Table Src
where Src.id = Tgt.id
)
where exists
(select 1
from Src_Table Src
where Src.id = Tgt.id
); -- without this filter, unmatched target rows would get Name set to NULL
2 rows updated. --Notice that ID 1 is updated even though value did not change
select * from Tgt_Table;
ID NAME
----- --------------------
2 def
1 abc
insert into tgt_Table
select src.*
from Src_Table src,
tgt_Table tgt
where src.id = tgt.id(+)
and tgt.id is null;
1 row created.
SQL> select * from tgt_Table;
ID NAME
---------- --------------------
2 def
1 abc
3 ghi
commit;
There could be better ways to do this, but this seems simple and SQL-oriented. If the data set is large, a row-by-row PL/SQL solution won't be as performant.
There are at least two options I can think of aside from digging into the security policy, which I don't know much about.
Process the records to merge row by row. Attempt the update; if it updates no rows, do the insert (or vice versa, depending on whether you expect most records to need updating or inserting, i.e. optimize for the most common case to reduce the number of SQL statements fired), e.g.:
begin
for row in (select ... from source_table) loop
update table_to_be_merged
if sql%rowcount = 0 then -- no row matched, so need to insert
insert ...
end if;
end loop;
end;
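Fleshed out against the hypothetical src_table/tgt_table pair from the earlier answer, a runnable version of that sketch might look like:
begin
  for r in (select id, name from src_table) loop
    update tgt_table tgt
    set tgt.name = r.name
    where tgt.id = r.id;
    if sql%rowcount = 0 then -- no row matched, so insert instead
      insert into tgt_table (id, name) values (r.id, r.name);
    end if;
  end loop;
end;
/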
Another option may be to bulk collect the records you want to merge into an array, and then attempt to bulk insert them, catching all the primary key exceptions (the syntax is FORALL ... SAVE EXCEPTIONS; you can get a bulk insert to report all the rows that fail to insert and then process them - see the sketch below).
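A sketch of that bulk approach, again using the hypothetical src_table/tgt_table (rows that hit a unique constraint violation are updated instead):
declare
  type t_rows is table of src_table%rowtype;
  l_rows t_rows;
  dml_errors exception;
  pragma exception_init(dml_errors, -24381); -- ORA-24381: errors in array DML
begin
  select * bulk collect into l_rows from src_table;
  begin
    forall i in 1 .. l_rows.count save exceptions
      insert into tgt_table values l_rows(i);
  exception
    when dml_errors then
      -- each failed row's index is reported in sql%bulk_exceptions
      for i in 1 .. sql%bulk_exceptions.count loop
        update tgt_table
        set name = l_rows(sql%bulk_exceptions(i).error_index).name
        where id = l_rows(sql%bulk_exceptions(i).error_index).id;
      end loop;
  end;
end;
/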
Logically a merge statement has to check for the presence of each record behind the scenes anyway, and I think it is processed quite similarly to the code above. However, MERGE will always be more efficient than coding it in PL/SQL, as it is only one SQL call instead of many.
