Explanation of Oracle PARTITION BY vs GROUP BY for similar results

I have the following table (Marks):
firstname  lastname  Mark
-------------------------
arun       prasanth  40
ann        antony    45
sruthy     abc       41
new        abc       47
arun       prasanth  45
arun       prasanth  49
ann        antony    49
I would like to add a column that flags whether a record with the same values in specific columns occurs more than once. This is the desired result:
firstname  lastname  Mark  MULTI_FLAG
-------------------------------------
arun       prasanth  40    1
ann        antony    45    1
sruthy     abc       41    0
new        abc       47    0
arun       prasanth  45    1
arun       prasanth  49    1
ann        antony    49    1
I can get the result with the following GROUP BY query:
SELECT M1.firstname
      ,M1.lastname
      ,M1.Mark
      ,M2.MULTI_COUNT
FROM   Marks M1
JOIN   (SELECT firstname, lastname,
               CASE WHEN COUNT(*) > 1 THEN 1 ELSE 0 END AS MULTI_COUNT
        FROM   Marks
        GROUP BY firstname, lastname) M2
  ON   M2.firstname = M1.firstname AND M2.lastname = M1.lastname;
Or by this much prettier PARTITION BY query:
SELECT
    firstname,
    lastname,
    CASE WHEN COUNT(*) OVER (PARTITION BY firstname, lastname) > 1
         THEN 1 ELSE 0
    END AS MULTI_FLAG
FROM
    Marks
Running the GROUP BY query on a similar, much larger table took:
34 m 56 s 595 ms
Running the PARTITION BY query on that table took:
First run: 55 m 47 s 851 ms
Second run: 36 m 46 s 95 ms
I would be interested in knowing:
The best way to achieve my results
What accounts for the performance difference.
EDIT: How to read the query plan.
EDIT:
Oracle Version
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
"CORE 11.2.0.3.0 Production"
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
PARTITION BY Plan
PLAN_TABLE_OUTPUT
Plan hash value: 3822227444
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 668K| 90M| | 90429 (1)| 00:18:06 |
| 1 | WINDOW SORT | | 668K| 90M| 98M| 90429 (1)| 00:18:06 |
|* 2 | HASH JOIN RIGHT OUTER | | 668K| 90M| | 69340 (1)| 00:13:53 |
| 3 | TABLE ACCESS FULL | COUNTRY_REGION_MAPPINGS | 177 | 4779 | | 3 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | | | | | |
| 5 | NESTED LOOPS | | 377K| 41M| | 69335 (1)| 00:13:53 |
| 6 | MAT_VIEW ACCESS FULL | PROJINFO_MAX_ITER_MVW | 17713 | 328K| | 782 (1)| 00:00:10 |
|* 7 | INDEX RANGE SCAN | Q_CLIN_ASSUM_BYCOUN_PK | 1 | | | 3 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID| Q_CLINICAL_ASSUM_BYCOUNTRY | 21 | 2016 | | 4 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(UPPER("CRM"."COUNTRY"(+))=UPPER("QCAB"."TRIAL_COUNTRY"))
7 - access("PMIM"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "PMIM"."CONTRACTNUM"="QCAB"."CONTRACTNUM"
AND "PMIM"."ITERATION"="QCAB"."ITERATION")
filter(UPPER("QCAB"."SHEET_LOC") LIKE '%COUNTRY ASSUMPTIONS%' OR UPPER("QCAB"."SHEET_LOC") LIKE
'INPUT%')
GROUP BY Plan
PLAN_TABLE_OUTPUT
Plan hash value: 648231064
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 912 | 2052K| | 226K (1)| 00:45:22 |
|* 1 | HASH JOIN | | 912 | 2052K| | 226K (1)| 00:45:22 |
| 2 | TABLE ACCESS FULL | COUNTRY_REGION_MAPPINGS | 177 | 4779 | | 3 (0)| 00:00:01 |
|* 3 | HASH JOIN | | 89667 | 194M| 45M| 226K (1)| 00:45:22 |
| 4 | NESTED LOOPS | | | | | | |
| 5 | NESTED LOOPS | | 377K| 41M| | 69335 (1)| 00:13:53 |
| 6 | MAT_VIEW ACCESS FULL | PROJINFO_MAX_ITER_MVW | 17713 | 328K| | 782 (1)| 00:00:10 |
|* 7 | INDEX RANGE SCAN | Q_CLIN_ASSUM_BYCOUN_PK | 1 | | | 3 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID | Q_CLINICAL_ASSUM_BYCOUNTRY | 21 | 2016 | | 4 (0)| 00:00:01 |
| 9 | VIEW | | 668K| 1377M| | 86518 (1)| 00:17:19 |
| 10 | HASH GROUP BY | | 668K| 72M| 80M| 86518 (1)| 00:17:19 |
|* 11 | HASH JOIN RIGHT OUTER | | 668K| 72M| | 69340 (1)| 00:13:53 |
| 12 | TABLE ACCESS FULL | COUNTRY_REGION_MAPPINGS | 177 | 2478 | | 3 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | | | | | |
| 14 | NESTED LOOPS | | 377K| 35M| | 69335 (1)| 00:13:53 |
| 15 | MAT_VIEW ACCESS FULL | PROJINFO_MAX_ITER_MVW | 17713 | 328K| | 782 (1)| 00:00:10 |
|* 16 | INDEX RANGE SCAN | Q_CLIN_ASSUM_BYCOUN_PK | 1 | | | 3 (0)| 00:00:01 |
| 17 | TABLE ACCESS BY INDEX ROWID| Q_CLINICAL_ASSUM_BYCOUNTRY | 21 | 1701 | | 4 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("R2"."TRIAL_COUNTRY_CD"="CRM"."COUNTRY_CD" AND
UPPER("CRM"."COUNTRY")=UPPER("QCAB"."TRIAL_COUNTRY"))
3 - access("R2"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "R2"."ITERATION"="QCAB"."ITERATION" AND
"R2"."CONTRACTNUM"="QCAB"."CONTRACTNUM" AND "R2"."ASSUMPTION"="QCAB"."ASSUMPTION")
7 - access("PMIM"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "PMIM"."CONTRACTNUM"="QCAB"."CONTRACTNUM" AND
"PMIM"."ITERATION"="QCAB"."ITERATION")
filter(UPPER("QCAB"."SHEET_LOC") LIKE '%COUNTRY ASSUMPTIONS%' OR UPPER("QCAB"."SHEET_LOC") LIKE 'INPUT%')
11 - access(UPPER("CRM"."COUNTRY"(+))=UPPER("QCAB"."TRIAL_COUNTRY"))
16 - access("PMIM"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "PMIM"."CONTRACTNUM"="QCAB"."CONTRACTNUM" AND
"PMIM"."ITERATION"="QCAB"."ITERATION")
filter(UPPER("QCAB"."SHEET_LOC") LIKE '%COUNTRY ASSUMPTIONS%' OR UPPER("QCAB"."SHEET_LOC") LIKE 'INPUT%')

Typically you would start with the analytic function COUNT(*), which leads to compact SQL.
The drawback of this approach is that the data must be sorted (see the WINDOW SORT operation). The GROUP BY approach avoids
the sorting, as a HASH GROUP BY may be used, which can lead to better performance.
Your example is a bit more involved, as you do not query a table but a view that joins three tables. This join is performed twice, once for the GROUP BY and once for the detail data, which
is of course not optimal.
So I would start with the analytic function version of the query (possibly with a PARALLEL option), as sketched below.
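For illustration only, here is what that might look like on the Marks example from the question; the degree of parallelism (4) is an arbitrary assumption, and your real query runs against the three-table view rather than a plain table.
-- Sketch: analytic COUNT(*) with a PARALLEL hint (degree 4 is an assumption).
SELECT /*+ PARALLEL(m 4) */
       m.firstname,
       m.lastname,
       m.Mark,
       CASE WHEN COUNT(*) OVER (PARTITION BY m.firstname, m.lastname) > 1
            THEN 1 ELSE 0
       END AS MULTI_FLAG
FROM   Marks m;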
If you want to try the GROUP BY, a lightweight version is possible:
1) group only the duplicated keys
2) use an OUTER JOIN to assign the MULTI_FLAG
An example with its execution plan is below (a simple test with your data):
with dups as (
    select firstname, lastname
    from tmp
    group by firstname, lastname
    having count(*) > 1
)
select tmp.FIRSTNAME, tmp.LASTNAME, tmp.MARK,
       case when dups.firstname is not NULL then 1 else 0 end as MULTI_FLAG
from tmp
left outer join dups
    on tmp.firstname = dups.firstname
   and tmp.lastname = dups.lastname;
You still need to access your view twice, but the final join will be faster (especially if you have only a small number of duplicated keys).
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 105K| 26M| | 1673 (1)| 00:00:21 |
|* 1 | HASH JOIN RIGHT OUTER| | 105K| 26M| 11M| 1673 (1)| 00:00:21 |
| 2 | VIEW | | 105K| 10M| | 128 (4)| 00:00:02 |
|* 3 | FILTER | | | | | | |
| 4 | HASH GROUP BY | | 105K| 10M| | 128 (4)| 00:00:02 |
| 5 | TABLE ACCESS FULL| TMP | 105K| 10M| | 125 (1)| 00:00:02 |
| 6 | TABLE ACCESS FULL | TMP | 105K| 15M| | 125 (1)| 00:00:02 |
--------------------------------------------------------------------------------------

Related

Limit rows examined in Oracle

My table has millions of records. In this query below, can I make Oracle 12c examine the first X rows only instead of doing a full table scan?
The value of X, I imagine, should be Offset + Fetch Next, so in this case 15:
SELECT * FROM table OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY;
Thanks in advance
Edit 1
These are the tables involved and this is the actual query
Orders - This table has 113k records in my test DB (and over 8 million in the prod DB, as my original question mentioned)
--------------------------
| Id | SKUField1|SKUField2|
--------------------------
| 1 | Value1 | Value2 |
| 2 | Value2 | Value2 |
| 3 | Value1 | Value3 |
--------------------------
Products - This table has 2 million records in my test DB (the prod DB is similar)
---------------
| PId| SKU_NUM|
---------------
| 1 | Value1 |
| 2 | Value2 |
| 3 | Value3 |
---------------
Note that values of Orders.SKUField1 and Orders.SKUField2 come from the Products.SKU_NUM values
Actual Query:
SELECT /*+ gather_plan_statistics */ Id, PId, SKUField1, SKUField2, SKU_NUM
FROM Orders
LEFT JOIN (
    -- this inner query reduces the size of Products from 2 million rows down to 1462 rows
    select * from Products where SKU_NUM in (
        select SKUField1 from Orders
    )
) p1 ON SKUField1 = p1.SKU_NUM
LEFT JOIN (
    -- this inner query reduces the size of Products from 2 million rows down to 459 rows
    select * from Products where SKU_NUM in (
        select SKUField2 from Orders
    )
) p4 ON SKUField2 = p4.SKU_NUM
OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY
Execution Plan:
--------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Time | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 10 |00:00:00.06 | 8013 | | | |
|* 1 | VIEW | | 1 | 00:00:01 | 10 |00:00:00.06 | 8013 | | | |
|* 2 | WINDOW NOSORT STOPKEY | | 1 | 00:00:01 | 15 |00:00:00.06 | 8013 | 27M| 1904K| |
|* 3 | HASH JOIN RIGHT OUTER | | 1 | 00:00:01 | 15 |00:00:00.06 | 8013 | 1162K| 1162K| 1344K (0)|
| 4 | VIEW | | 1 | 00:00:01 | 1462 |00:00:00.04 | 6795 | | | |
| 5 | NESTED LOOPS | | 1 | 00:00:01 | 1462 |00:00:00.04 | 6795 | | | |
| 6 | NESTED LOOPS | | 1 | 00:00:01 | 1462 |00:00:00.04 | 5333 | | | |
| 7 | SORT UNIQUE | | 1 | 00:00:01 | 1469 |00:00:00.04 | 3010 | 80896 | 80896 |71680 (0)|
| 8 | TABLE ACCESS FULL | Orders | 1 | 00:00:01 | 113K|00:00:00.02 | 3010 | | | |
|* 9 | INDEX UNIQUE SCAN | UIX_Product_SKU_NUM | 1469 | 00:00:01 | 1462 |00:00:00.01 | 2323 | | | |
| 10 | TABLE ACCESS BY INDEX ROWID | Products | 1462 | 00:00:01 | 1462 |00:00:00.01 | 1462 | | | |
|* 11 | HASH JOIN RIGHT OUTER | | 1 | 00:00:01 | 15 |00:00:00.02 | 1218 | 1142K| 1142K| 1335K (0)|
| 12 | VIEW | | 1 | 00:00:01 | 459 |00:00:00.02 | 1213 | | | |
| 13 | NESTED LOOPS | | 1 | 00:00:01 | 459 |00:00:00.02 | 1213 | | | |
| 14 | NESTED LOOPS | | 1 | 00:00:01 | 459 |00:00:00.02 | 754 | | | |
| 15 | SORT UNIQUE | | 1 | 00:00:01 | 462 |00:00:00.02 | 377 | 24576 | 24576 |22528 (0)|
| 16 | INDEX FAST FULL SCAN | Orders_SKUField2_IDX6 | 1 | 00:00:01 | 113K|00:00:00.01 | 377 | | | |
|* 17 | INDEX UNIQUE SCAN | UIX_Product_SKU_NUM | 462 | 00:00:01 | 459 |00:00:00.01 | 377 | | | |
| 18 | TABLE ACCESS BY INDEX ROWID| Products | 459 | 00:00:01 | 459 |00:00:00.01 | 459 | | | |
| 19 | TABLE ACCESS FULL | Orders | 1 | 00:00:01 | 15 |00:00:00.01 | 5 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------------
Hence, based on the "A-Rows" column values for row Ids 8 and 16 in the execution plan, it seems like there are full table scans on the Orders table (though row Id 16 at least seems to be using an index). So my question is: is it true that there is a full table scan on the Orders table even though I am using OFFSET/FETCH NEXT?
Although your FETCH clause may use a full table scan, Oracle will still only fetch the first X rows from the table.
In the following example, the "TABLE ACCESS FULL" operation does start to read the entire table, but it gets cut off part of the way through by the "WINDOW NOSORT STOPKEY" operation. Not all full table scans actually scan the full table. You would see similar behavior if your code ended with WHERE ROWNUM <= 50.
CREATE TABLE some_table AS SELECT * FROM all_objects;
EXPLAIN PLAN FOR SELECT * FROM some_table OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY;
SELECT * FROM TABLE(dbms_xplan.display);
Plan hash value: 2559837639
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 15 | 7410 | 2 (0)| 00:00:01 |
|* 1 | VIEW | | 15 | 7410 | 2 (0)| 00:00:01 |
|* 2 | WINDOW NOSORT STOPKEY| | 15 | 2010 | 2 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | SOME_TABLE | 15 | 2010 | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=15 AND
"from$_subquery$_002"."rowlimit_$$_rownumber">5)
2 - filter(ROW_NUMBER() OVER ( ORDER BY NULL )<=15)
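For comparison, the ROWNUM form mentioned above behaves the same way on this table. This is just a sketch; the early termination (typically a COUNT STOPKEY step in the plan) is what I would expect rather than something verified here.
-- Sketch: the classic ROWNUM limit on the same table; the full scan can
-- stop early once 50 rows have been produced.
SELECT * FROM some_table WHERE ROWNUM <= 50;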
The performance implications get more complicated if you also want to order the results. If that is the case, you may want to post the full query and execution plan.
(EDIT: 2022-09-25)
Yes, there is a full table scan on the ORDERS table happening on line 8 of the execution plan. As you mentioned, you can look at the "A-rows" column to tell what's really happening.
But the third full table scan of ORDERS, on line 19, is not a "full" full table scan. The operation "WINDOW NOSORT STOPKEY" stops that full table scan as soon as the 15 necessary rows are read. So the FETCH syntax is helping at least a little.
Applying a FETCH to a query does not mean that every single table will be limited. Although, in your query, it does seem like there ought to be a way to reduce the full table scans. Perhaps an index on SKUField1 would help?
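If you want to try that last suggestion, the index itself is a plain single-column index; the index name below is made up.
-- Hypothetical index on the join column (name is an assumption).
CREATE INDEX orders_skufield1_idx ON Orders (SKUField1);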
Since Oracle, as far as I know, does not provide something like LIMIT or TOP, you can build it yourself like the following:
What is happening here: the inner query gets the first 10 records and the outer query selects them; you can still use other clauses such as WHERE or ORDER BY.
SELECT * FROM (
SELECT * FROM Customers WHERE CustomerID <= 10 ORDER BY CustomerID
)
A full article about this topic can be found at Oracle-Fetch.
I am using an online Oracle instance, so you can try it from your end; please let me know if you still have a problem.
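For completeness, the classic pre-12c way to emulate OFFSET/FETCH is ROWNUM-based pagination. This sketch returns rows 6-15 of Customers ordered by CustomerID, mirroring OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY.
-- Sketch: ROWNUM pagination (rows 6-15 by CustomerID).
SELECT *
FROM (
    SELECT t.*, ROWNUM AS rn
    FROM (SELECT * FROM Customers ORDER BY CustomerID) t
    WHERE ROWNUM <= 15
)
WHERE rn > 5;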

how to speed up the order by query in oracle

My pagination query below runs fast (2.5 s) without the ORDER BY.
If I use the ORDER BY, it gets slower (180 s).
The total number of records is only 90,000.
select * from (
    select i.*, rownum rno from (
        select opp.updat, nvl(s.name, c.vemail), s.name, c.vemail
        from sfa_opportunities opp, sfa_company s, customer c
        where opp.companyid = c.companyid(+)
          and opp.custid = c.custid(+)
          and opp.companyid = s.companyid(+)
          and opp.sfacompid = s.sfacompid(+)
        order by 2 asc, 1 asc
    ) i
) where rno >= 1 and rno <= 30
I have given the explain plan below for reference.
---------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 97980 | 110M| | 14137 (1)| 00:03:18 |
|* 1 | VIEW | | 97980 | 110M| | 14137 (1)| 00:03:18 |
| 2 | COUNT | | | | | | |
| 3 | VIEW | | 97980 | 109M| | 14137 (1)| 00:03:18 |
| 4 | SORT ORDER BY | | 97980 | 6602K| 15M| 14137 (1)| 00:03:18 |
| 5 | NESTED LOOPS OUTER | | 97980 | 6602K| | 13137 (1)| 00:03:04 |
|* 6 | HASH JOIN RIGHT OUTER | | 97980 | 3635K| 1136K| 614 (1)| 00:00:09 |
| 7 | TABLE ACCESS FULL | SFA_COMPANY | 34851 | 714K| | 58 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL | SFA_OPPORTUNITIES | 97980 | 1626K| | 390 (1)| 00:00:06 |
|* 9 | TABLE ACCESS BY INDEX ROWID| CUSTOMER | 1 | 31 | | 1 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | PK_CUSTOMER_CUSTID | 1 | | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RNO"<=30 AND "RNO">=1)
6 - access("OPP"."COMPANYID"="S"."COMPANYID"(+) AND "OPP"."SFACOMPID"="S"."SFACOMPID"(+))
9 - filter("OPP"."COMPANYID"="C"."COMPANYID"(+))
10 - access("OPP"."CUSTID"="C"."CUSTID"(+))
You're sorting on nvl(s.name,c.vemail), then opp.updat. My guess is the NVL may prevent a lot of optimization, because Oracle can't tell what the value of that column is going to be without looking at every row in the joined result. You could try adding indexes on those three columns or a function based index on nvl(s.name,c.vemail).
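If you want to experiment with the first suggestion, indexes on the three sort columns might look like the sketch below; the index names are made up. Note that the NVL expression combines columns from two different tables, so any function-based index would have to be built on a single table's column rather than on the cross-table expression itself.
-- Sketch: single-column indexes on the three sort columns (names are assumptions).
CREATE INDEX sfa_company_name_idx ON sfa_company (name);
CREATE INDEX customer_vemail_idx  ON customer (vemail);
CREATE INDEX sfa_opp_updat_idx    ON sfa_opportunities (updat);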

Optimize TO_TIMESTAMP() query within Oracle

Every time I execute this query, it takes about 2 minutes:
select * from CPOB_Monitoring_Dashboard
where VOYAGE_STRT_DT >= TO_TIMESTAMP('2014-07-03 00:00:00.000','YYYY-MM-DD HH24:MI:SS.FF')
and VOYAGE_STRT_DT <= TO_TIMESTAMP('2018-07-03 00:00:00.000','YYYY-MM-DD HH24:MI:SS.FF')
However, if I change it to use TO_DATE instead of TO_TIMESTAMP, it is really fast.
LINQ is generating the query using TO_TIMESTAMP and I have not yet found a way to make it use TO_DATE. Is there any way I can optimize the TO_TIMESTAMP query?
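For reference, this is the TO_DATE variant the question describes as fast; the assumption is that VOYAGE_STRT_DT is a DATE column, which the INTERNAL_FUNCTION(...) wrapping in the plan below suggests, so with DATE literals the column no longer has to be converted for the comparison.
-- Sketch: the fast variant with DATE literals instead of TIMESTAMPs
-- (assumes VOYAGE_STRT_DT is a DATE column).
select * from CPOB_Monitoring_Dashboard
where VOYAGE_STRT_DT >= TO_DATE('2014-07-03 00:00:00','YYYY-MM-DD HH24:MI:SS')
  and VOYAGE_STRT_DT <= TO_DATE('2018-07-03 00:00:00','YYYY-MM-DD HH24:MI:SS')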
Here is the execution plan for the query using TO_TIMESTAMP:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 246273147
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 21842 | 4820K| | 1336 (1)| 00:00:17 |
| 1 | VIEW | CPOB_Monitoring_Dashboard| 21842 | 4820K| | 1336 (1)| 00:00:17 |
| 2 | HASH UNIQUE | | 21842 | 3988K| 4384K| 1336 (1)| 00:00:17 |
| 3 | NESTED LOOPS | | 21842 | 3988K| | 442 (1)| 00:00:06 |
| 4 | NESTED LOOPS | | 47 | 7661 | | 160 (1)| 00:00:02 |
|* 5 | TABLE ACCESS FULL | VOYAGE_INFO | 46 | 1012 | | 68 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| PROCESS_CTRL | 1 | 141 | | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | VOYAGE_ID_IDX | 1 | | | 1 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | PLY_IDX2 | 467 | 11208 | | 6 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter(INTERNAL_FUNCTION("CPVI"."VOYAGE_STRT_DT")>=TIMESTAMP' 2014-07-03 00:00:00.000000000' AND
INTERNAL_FUNCTION("CPVI"."VOYAGE_STRT_DT")<=TIMESTAMP' 2018-07-03 00:00:00.000000000')
7 - access("CPC"."VOYAGE_ID"="CPVI"."VOYAGE_ID")
8 - access("CPC"."BRAND_NAME"="CPOB"."BRAND_ID" AND "CPC"."SHIP_NAME"=""SHIP_NAME")
filter("CPC"."SHIP_NAME"="CPOB"."SHIP_NAME")
24 rows selected.

oracle 12c query is slow

The code below works well in 11g, but in 12c it is very slow (over 10 seconds).
SELECT * FROM DMProgValue_00001
WHERE 1=1
  AND ProgressOID IN (
      SELECT P.OID FROM (
          SELECT OID FROM (
              SELECT A.OID, ROWNUM AS seqNum FROM (
                  SELECT OID FROM DMProgress_00001 WHERE 1=1
                    AND Project = 'Q539'
                  ORDER BY actCode
              ) A
              WHERE ROWNUM <= 40
          ) WHERE seqNum > 20
      ) P
  );
Plan hash value: 763232015
-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 189 | 171 (1)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | TABLE ACCESS FULL | DMPROGVALUE_00001 | 1 | 189 | 68 (0)| 00:00:01 |
|* 3 | VIEW | | 20 | 800 | 103 (1)| 00:00:01 |
|* 4 | COUNT STOPKEY | | | | | |
| 5 | VIEW | | 2916 | 78732 | 103 (1)| 00:00:01 |
|* 6 | SORT ORDER BY STOPKEY| | 2916 | 130K| 103 (1)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | DMPROGRESS_00001 | 2916 | 130K| 102 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( EXISTS (SELECT 0 FROM (SELECT "A"."OID" "OID",ROWNUM "SEQNUM" FROM
(SELECT "OID" "OID" FROM "DMPROGRESS_00001" "DMPROGRESS_00001" WHERE
"PHASE"='Construction' AND "PROJECT"='Q539' ORDER BY "ACTCODE") "A" WHERE ROWNUM<=40)
"from$_subquery$_003" WHERE "SEQNUM">20 AND "OID"=:B1))
3 - filter("SEQNUM">20 AND "OID"=:B1)
4 - filter(ROWNUM<=40)
6 - filter(ROWNUM<=40)
7 - filter("PHASE"='Construction' AND "PROJECT"='Q539')
DMProgress_00001 stats
NUM_ROWS : 10385
BLOCKS : 370
AVG_ROW_LEN : 176
SAMPLE_SIZE : 8263
DMProgValue_00001 stats
NUM_ROWS : 15703
BLOCKS : 244
AVG_ROW_LEN : 49
SAMPLE_SIZE : 5033
It's only about 10k rows per table and the indexes are well made (I can tell from my 11g experience). I know a workaround that makes it fast (the code below runs in 0.001 sec), but I want to understand the real problem and fix it.
I cannot understand it: there is only one subquery and about 10k rows in each table. Even apart from the comparison with 11g, there is no way this query should take over 10 seconds.
SELECT * FROM DMProgValue_00001 V,
    (SELECT OID FROM (
        SELECT A.OID, ROWNUM AS seqNum FROM (
            SELECT OID FROM DMProgress_00001 WHERE 1=1
              AND Project = 'Q539'
            ORDER BY actCode
        ) A
        WHERE ROWNUM <= 40
    ) WHERE seqNum > 20
    ) P
WHERE 1=1
  AND V.ProgressOID = P.OID;
Added: the plan of a similar query on 11g
Plan hash value: 3049049852
-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 9 | 684 | 49 (5)| 00:00:01 |
|* 1 | HASH JOIN RIGHT SEMI | | 9 | 684 | 49 (5)| 00:00:01 |
| 2 | VIEW | VW_NSO_1 | 3 | 81 | 35 (3)| 00:00:01 |
|* 3 | VIEW | | 3 | 75 | 35 (3)| 00:00:01 |
|* 4 | COUNT STOPKEY | | | | | |
| 5 | VIEW | | 3 | 36 | 35 (3)| 00:00:01 |
|* 6 | SORT ORDER BY STOPKEY| | 3 | 144 | 35 (3)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | DMPROGRESS_00037 | 3 | 144 | 34 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL | DMPROGVALUE_00037 | 5106 | 244K| 13 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("PROGRESSOID"="OID")
3 - filter("SEQNUM">20)
4 - filter(ROWNUM<=40)
6 - filter(ROWNUM<=40)
7 - filter("DISPLINE"='Q340' AND "PHASE"='Procurement' AND "PROJECT"='Moho')
Oracle 11g automatically transforms the query into a hash join, but 12c does not. I think this is the point; the queries have the same structure.
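If the difference really is that 12c keeps the subquery as a correlated FILTER while 11g unnests it into a hash semi join (VW_NSO_1 in the 11g plan), one thing to experiment with is the UNNEST hint inside the subquery. This is only a sketch; the hint is a real Oracle hint, but whether the optimizer honors it for this particular subquery is not guaranteed.
-- Sketch: ask the optimizer to unnest the IN subquery (may not be honored).
SELECT * FROM DMProgValue_00001
WHERE ProgressOID IN (
    SELECT /*+ UNNEST */ P.OID FROM (
        SELECT OID FROM (
            SELECT A.OID, ROWNUM AS seqNum FROM (
                SELECT OID FROM DMProgress_00001
                WHERE Project = 'Q539'
                ORDER BY actCode
            ) A
            WHERE ROWNUM <= 40
        ) WHERE seqNum > 20
    ) P
);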

inefficient SQL plan on the partitioned Table

We are facing an issue with one of our queries, which joins a few tables.
Even though there are a few hundred records in the table, the plan goes to a merge join, thinking there is only one record in the table; please find the plan below.
When the merge join plan is used, the query fails with a temp space issue.
Oracle chooses the merge plan only when the job has just loaded into a newly created partition; for the older partitions it chooses a hash join and we get results in a few seconds.
For information, all the joined tables have the same volume.
Could you please explain why this is happening?
Merge join (query hung)
-----------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 712 | 36 (3)| 00:00:01 | | |
|* 1 | HASH JOIN | | 1 | 712 | 36 (3)| 00:00:01 | | |
| 2 | MERGE JOIN CARTESIAN | | 1 | 679 | 28 (0)| 00:00:01 | | |
| 3 | MERGE JOIN CARTESIAN | | 1 | 615 | 21 (0)| 00:00:01 | | |
| 4 | MERGE JOIN CARTESIAN | | 1 | 388 | 14 (0)| 00:00:01 | | |
| 5 | PARTITION RANGE SINGLE | | 1 | 105 | 7 (0)| 00:00:01 | 4 | 4 |
|* 6 | TABLE ACCESS FULL | PCA_DCM_CLNT_BSPKE_REFS_M_LND | 1 | 105 | 7 (0)| 00:00:01 | 4 | 4 |
| 7 | BUFFER SORT | | 1 | 283 | 7 (0)| 00:00:01 | | |
| 8 | PARTITION RANGE SINGLE| | 1 | 283 | 7 (0)| 00:00:01 | 4 | 4 |
|* 9 | TABLE ACCESS FULL | PCA_DCM_INDBTDNS_BLK_M_LND | 1 | 283 | 7 (0)| 00:00:01 | 4 | 4 |
| 10 | BUFFER SORT | | 1 | 227 | 14 (0)| 00:00:01 | | |
| 11 | PARTITION RANGE SINGLE | | 1 | 227 | 7 (0)| 00:00:01 | 4 | 4 |
|* 12 | TABLE ACCESS FULL | PCA_DCM_DELPHI_BLK_M_LND | 1 | 227 | 7 (0)| 00:00:01 | 4 | 4 |
| 13 | BUFFER SORT | | 1 | 64 | 21 (0)| 00:00:01 | | |
| 14 | PARTITION RANGE SINGLE | | 1 | 64 | 7 (0)| 00:00:01 | 4 | 4 |
|* 15 | TABLE ACCESS FULL | PCA_DCM_APACS_BLK_M_LND | 1 | 64 | 7 (0)| 00:00:01 | 4 | 4 |
| 16 | PARTITION RANGE SINGLE | | 1 | 33 | 7 (0)| 00:00:01 | 4 | 4 |
|* 17 | TABLE ACCESS FULL | PCA_DCM_SCORE_BLK_M_LND | 1 | 33 | 7 (0)| 00:00:01 | 4 | 4 |
-----------------------------------------------------------------------------------------------------------------------------
Hash join (we get the results in a few seconds)
----------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 491 | 341K| 74 (3)| 00:00:01 | | |
|* 1 | HASH JOIN | | 491 | 341K| 74 (3)| 00:00:01 | | |
| 2 | PARTITION RANGE SINGLE | | 493 | 109K| 14 (0)| 00:00:01 | 3 | 3 |
|* 3 | TABLE ACCESS FULL | PCA_DCM_DELPHI_BLK_M_LND | 493 | 109K| 14 (0)| 00:00:01 | 3 | 3 |
|* 4 | HASH JOIN | | 491 | 232K| 60 (4)| 00:00:01 | | |
|* 5 | HASH JOIN | | 492 | 99384 | 45 (3)| 00:00:01 | | |
|* 6 | HASH JOIN | | 492 | 47724 | 31 (4)| 00:00:01 | | |
| 7 | PARTITION RANGE SINGLE| | 493 | 16269 | 14 (0)| 00:00:01 | 3 | 3 |
|* 8 | TABLE ACCESS FULL | PCA_DCM_SCORE_BLK_M_LND | 493 | 16269 | 14 (0)| 00:00:01 | 3 | 3 |
| 9 | PARTITION RANGE SINGLE| | 493 | 31552 | 16 (0)| 00:00:01 | 3 | 3 |
|* 10 | TABLE ACCESS FULL | PCA_DCM_APACS_BLK_M_LND | 493 | 31552 | 16 (0)| 00:00:01 | 3 | 3 |
| 11 | PARTITION RANGE SINGLE | | 493 | 51765 | 14 (0)| 00:00:01 | 3 | 3 |
|* 12 | TABLE ACCESS FULL | PCA_DCM_CLNT_BSPKE_REFS_M_LND | 493 | 51765 | 14 (0)| 00:00:01 | 3 | 3 |
| 13 | PARTITION RANGE SINGLE | | 493 | 136K| 14 (0)| 00:00:01 | 3 | 3 |
|* 14 | TABLE ACCESS FULL | PCA_DCM_INDBTDNS_BLK_M_LND | 493 | 136K| 14 (0)| 00:00:01 | 3 | 3 |
----------------------------------------------------------------------------------------------------------------------------
Please find the query
SELECT
    substr(BLK.ACC_NUM,1,14) AS ACCOUNT_NUMBER,
    CASE WHEN SUBSTR(BLK.ACC_NUM,20,1) = '1' THEN 'F1'
         WHEN SUBSTR(BLK.ACC_NUM,20,1) = ' ' THEN 'F1'
         WHEN SUBSTR(BLK.ACC_NUM,20,1) = '0' THEN 'F1'
         WHEN SUBSTR(BLK.ACC_NUM,20,1) = '2' THEN 'F2'
    END FLTR,
    DELPHI.ND_SPA_CII_SPA
FROM
    BUR_LND.PCA_DCM_SCORE_BLK_M_LND BLK
    INNER JOIN BUR_LND.PCA_DCM_CLNT_BSPKE_REFS_M_LND REFFS
        ON BLK.ACC_NUM = REFFS.ACC_NUM
    INNER JOIN BUR_LND.PCA_DCM_INDBTDNS_BLK_M_LND IND
        ON BLK.ACC_NUM = IND.ACC_NUM
    INNER JOIN BUR_LND.PCA_DCM_DELPHI_BLK_M_LND DELPHI
        ON BLK.ACC_NUM = DELPHI.ACC_NUM
    INNER JOIN BUR_LND.PCA_DCM_APACS_BLK_M_LND APACS
        ON BLK.ACC_NUM = APACS.ACC_NUM
WHERE
    BLK.EFF_DT = TO_DATE('2013-10-30','YYYY-MM-DD')
    AND REFFS.EFF_DT = TO_DATE('2013-10-30','YYYY-MM-DD')
    AND IND.EFF_DT = TO_DATE('2013-10-30','YYYY-MM-DD')
    AND DELPHI.EFF_DT = TO_DATE('2013-10-30','YYYY-MM-DD')
    AND APACS.EFF_DT = TO_DATE('2013-10-30','YYYY-MM-DD')
Thanks in advance for help.
Thanks
arkesh
The plans are bad because the new partition is missing statistics. Statistics should be updated after partition changes, ideally using incremental statistics. If that's not possible then a hint like /*+ dynamic_sampling(4) */ can help.
Statistics can be accurate, inaccurate, or missing. Missing statistics are generally not a huge problem because of dynamic sampling. With the default dynamic sampling level, 2, Oracle will automatically gather statistics if a statement includes tables without statistics.
Unfortunately for this case, Oracle only considers missing statistics per table, not per partition. (That would be a good feature request, but that won't help you right now.) With the literals in the SQL statement Oracle appears to know exactly which partition to look in. Since there are no statistics for that partition it assumes there are no rows, leading to the bad plans.
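A minimal sketch of the incremental statistics route, using one of the tables from the query as an example; the same preference would be set on the other joined tables too.
-- Sketch: enable incremental statistics, then re-gather after each partition load.
begin
    dbms_stats.set_table_prefs('BUR_LND', 'PCA_DCM_SCORE_BLK_M_LND', 'INCREMENTAL', 'TRUE');
    dbms_stats.gather_table_stats('BUR_LND', 'PCA_DCM_SCORE_BLK_M_LND');
end;
/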
Example
Create a sample partitioned table with 1000 rows but no gathered statistics.
create table partition_test
(
a number,
b number
)
partition by range (a)
(
partition p1 values less than (2)
);
insert into partition_test select 1, level from dual connect by level <= 1000;
When there are no statistics Oracle will use dynamic sampling automatically and get a good row count. You can't see it in this simple plan, but normally this would lead to better access methods and better join operations.
explain plan for select * from partition_test where a = 1 and b <= 1000;
select * from table(dbms_xplan.display);
Plan hash value: 4097357352
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 26000 | 16 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE SINGLE| | 1000 | 26000 | 16 (0)| 00:00:01 | 1 | 1 |
|* 2 | TABLE ACCESS FULL | PARTITION_TEST | 1000 | 26000 | 16 (0)| 00:00:01 | 1 | 1 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("A"=1 AND "B"<=1000)
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
Gather stats, then create a new partition with data. Although that partition has 1000 rows and no stats, Oracle does not know that and just assumes it's empty.
begin
dbms_stats.gather_table_stats(user, 'partition_test');
end;
/
alter table partition_test add partition p2 values less than (3);
insert into partition_test select 2, level from dual connect by level <= 1000;
explain plan for select * from partition_test where a = 2 and b <= 1000;
select * from table(dbms_xplan.display);
Plan hash value: 4097357352
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 9 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE SINGLE| | 1 | 7 | 9 (0)| 00:00:01 | 2 | 2 |
|* 2 | TABLE ACCESS FULL | PARTITION_TEST | 1 | 7 | 9 (0)| 00:00:01 | 2 | 2 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("A"=2 AND "B"<=1000)
Explicitly requesting dynamic sampling will fix the cardinality estimates, which would likely solve your execution plan problems.
explain plan for select /*+ dynamic_sampling(4) */ * from partition_test where a = 2 and b <= 1000;
select * from table(dbms_xplan.display);
Plan hash value: 4097357352
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1000 | 7000 | 9 (0)| 00:00:01 | | |
| 1 | PARTITION RANGE SINGLE| | 1000 | 7000 | 9 (0)| 00:00:01 | 2 | 2 |
|* 2 | TABLE ACCESS FULL | PARTITION_TEST | 1000 | 7000 | 9 (0)| 00:00:01 | 2 | 2 |
---------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("A"=2 AND "B"<=1000)
Note
-----
- dynamic statistics used: dynamic sampling (level=4)
