How to optimize an Oracle query?

The following query takes around 45 seconds on Oracle 11g:
select count(cap.ISHIGH),ms.SID,ms.NUM from CDetail cap,MData ms
where cap.MDataID_FK=ms.MDataID_PK and trunc(cap.CREATEDTIME) between trunc(sysdate-10) and trunc(sysdate)
group by ms.SID,ms.NUM ;
Explain plan:
-------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 766K| 32M| | 94421 (1)| 00:18:54 |
| 1 | HASH GROUP BY | | 766K| 32M| 41M| 94421 (1)| 00:18:54 |
|* 2 | HASH JOIN | | 766K| 32M| 21M| 85716 (1)| 00:17:09 |
| 3 | VIEW | VW_GBC_5 | 766K| 13M| | 73348 (1)| 00:14:41 |
| 4 | HASH GROUP BY | | 766K| 13M| 98M| 73348 (1)| 00:14:41 |
|* 5 | FILTER | | | | | | |
| 6 | TABLE ACCESS BY INDEX ROWID| CDetail | 3217K| 58M| | 63738 (1)| 00:12:45 |
|* 7 | INDEX RANGE SCAN | IDX_CPCTYDTLTRNCCRTDTM | 3365K| | | 14679 (1)| 00:02:57 |
| 8 | TABLE ACCESS FULL | MData | 871K| 22M| | 9665 (1)| 00:01:56 |
-------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("ITEM_1"="MS"."MDataID_PK")
5 - filter(TRUNC(SYSDATE#!-10)<=TRUNC(SYSDATE#!))
7 - access(TRUNC(INTERNAL_FUNCTION("CREATEDTIME"))>=TRUNC(SYSDATE#!-10) AND
TRUNC(INTERNAL_FUNCTION("CREATEDTIME"))<=TRUNC(SYSDATE#!))
Table MData contains around 900,000 rows and table CDetail contains around 23,000,000 rows.
Should I introduce a new index, or is there another way to optimize the above query?
Edit 3: IDX_CPCTYDTLTRNCCRTDTM is a function-based index on trunc(CREATEDTIME).
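Presumably the index was created along these lines (a sketch of the stated definition; storage options unknown):
-- Function-based index on the truncated creation time
CREATE INDEX IDX_CPCTYDTLTRNCCRTDTM ON CDetail (TRUNC(CREATEDTIME));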
Edit 1:
Explain plan for the full table scan using the hint /*+ full(Cdetail) */:
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 780K| 33M| | 160K (2)| 00:32:01 |
| 1 | HASH GROUP BY | | 780K| 33M| 42M| 160K (2)| 00:32:01 |
|* 2 | HASH JOIN | | 780K| 33M| 22M| 151K (2)| 00:30:15 |
| 3 | VIEW | VW_GBC_5 | 780K| 13M| | 138K (2)| 00:27:46 |
| 4 | HASH GROUP BY | | 780K| 14M| 230M| 138K (2)| 00:27:46 |
|* 5 | FILTER | | | | | | |
|* 6 | TABLE ACCESS FULL| CDetail | 7521K| 136M| | 120K (2)| 00:24:02 |
| 7 | TABLE ACCESS FULL | MData | 890K| 22M| | 9666 (1)| 00:01:56 |
---------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("ITEM_1"="MS"."MDataID_PK")
5 - filter(TRUNC(SYSDATE#!-10)<=TRUNC(SYSDATE#!))
6 - filter(TRUNC(INTERNAL_FUNCTION("CREATEDTIME"))>=TRUNC(SYSDATE#!-10) AND
TRUNC(INTERNAL_FUNCTION("CREATEDTIME"))<=TRUNC(SYSDATE#!))

Thank you for sharing an explain plan; that's a good start. The thing with an explain plan, however, is that it gives you estimates, not actuals. If you can, could you get a SQL Monitor report? It will show you the actual cardinalities and where time is being spent in the query.
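If you are licensed for the Tuning Pack, a text SQL Monitor report can be pulled roughly like this (a sketch; the SQL_ID below is a hypothetical placeholder, look up the real one in V$SQL or V$SQL_MONITOR first):
-- Generate a text SQL Monitor report for the statement
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id => '0abc123def456',  -- hypothetical SQL_ID, replace with yours
         type   => 'TEXT') AS report
FROM dual;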
The date filter is expected to return about 3M rows (plan IDs 6 and 7). Is that accurate?
What is the definition of the IDX_CPCTYDTLTRNCCRTDTM index? Does it happen to be function-based?
Just to validate my thinking, can you add the following hint, run the query, and get the explain plan again?
select /*+ full( cap ) */ ...

Related

Explanation of Oracle PARTITION BY vs GROUP BY for similar results

I have the following table (Marks):
firstname lastname Mark
------------------------------
arun prasanth 40
ann antony 45
sruthy abc 41
new abc 47
arun prasanth 45
arun prasanth 49
ann antony 49
And would like to add a column that tags if a record with specific columns occurs more than once. This is the result:
firstname lastname Mark MULTI_FLAG
----------------------------------------------
arun prasanth 40 1
ann antony 45 1
sruthy abc 41 0
new abc 47 0
arun prasanth 45 1
arun prasanth 49 1
ann antony 49 1
I can get the result with the following GROUP BY query:
SELECT M1.firstname
,M1.lastname
,M1.Mark
,M2.MULTI_COUNT
FROM Marks M1
JOIN (SELECT firstname, lastname, CASE WHEN COUNT (*) > 1 THEN 1 ELSE 0 END AS MULTI_COUNT
FROM Marks
GROUP BY firstname, lastname) M2
ON M2.firstname = M1.firstname AND M2.lastname = M1.lastname;
Or by this much prettier PARTITION BY query:
SELECT
firstname,
lastname,
CASE WHEN COUNT(*) OVER (PARTITION BY
firstname,
lastname) > 1 THEN 1 ELSE 0 END AS MULTI_FLAG
FROM
Marks
Running the GROUP BY query on a similar large table returned in:
34 m 56 s 595 ms
Running the PARTITION BY query on a similar large table returned in:
First run: 55 m 47 s 851 ms
Second run: 36 m 46 s 95 ms
I would be interested in knowing:
1) the best way to achieve my results
2) what accounts for the performance difference
EDIT: 3) how to read the query plan
EDIT:
Oracle Version
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
"CORE 11.2.0.3.0 Production"
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
PARTITION BY Plan
PLAN_TABLE_OUTPUT
Plan hash value: 3822227444
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 668K| 90M| | 90429 (1)| 00:18:06 |
| 1 | WINDOW SORT | | 668K| 90M| 98M| 90429 (1)| 00:18:06 |
|* 2 | HASH JOIN RIGHT OUTER | | 668K| 90M| | 69340 (1)| 00:13:53 |
| 3 | TABLE ACCESS FULL | COUNTRY_REGION_MAPPINGS | 177 | 4779 | | 3 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | | | | | |
| 5 | NESTED LOOPS | | 377K| 41M| | 69335 (1)| 00:13:53 |
| 6 | MAT_VIEW ACCESS FULL | PROJINFO_MAX_ITER_MVW | 17713 | 328K| | 782 (1)| 00:00:10 |
|* 7 | INDEX RANGE SCAN | Q_CLIN_ASSUM_BYCOUN_PK | 1 | | | 3 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID| Q_CLINICAL_ASSUM_BYCOUNTRY | 21 | 2016 | | 4 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(UPPER("CRM"."COUNTRY"(+))=UPPER("QCAB"."TRIAL_COUNTRY"))
7 - access("PMIM"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "PMIM"."CONTRACTNUM"="QCAB"."CONTRACTNUM"
AND "PMIM"."ITERATION"="QCAB"."ITERATION")
filter(UPPER("QCAB"."SHEET_LOC") LIKE '%COUNTRY ASSUMPTIONS%' OR UPPER("QCAB"."SHEET_LOC") LIKE
'INPUT%')
GROUP BY Plan
PLAN_TABLE_OUTPUT
Plan hash value: 648231064
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 912 | 2052K| | 226K (1)| 00:45:22 |
|* 1 | HASH JOIN | | 912 | 2052K| | 226K (1)| 00:45:22 |
| 2 | TABLE ACCESS FULL | COUNTRY_REGION_MAPPINGS | 177 | 4779 | | 3 (0)| 00:00:01 |
|* 3 | HASH JOIN | | 89667 | 194M| 45M| 226K (1)| 00:45:22 |
| 4 | NESTED LOOPS | | | | | | |
| 5 | NESTED LOOPS | | 377K| 41M| | 69335 (1)| 00:13:53 |
| 6 | MAT_VIEW ACCESS FULL | PROJINFO_MAX_ITER_MVW | 17713 | 328K| | 782 (1)| 00:00:10 |
|* 7 | INDEX RANGE SCAN | Q_CLIN_ASSUM_BYCOUN_PK | 1 | | | 3 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID | Q_CLINICAL_ASSUM_BYCOUNTRY | 21 | 2016 | | 4 (0)| 00:00:01 |
| 9 | VIEW | | 668K| 1377M| | 86518 (1)| 00:17:19 |
| 10 | HASH GROUP BY | | 668K| 72M| 80M| 86518 (1)| 00:17:19 |
|* 11 | HASH JOIN RIGHT OUTER | | 668K| 72M| | 69340 (1)| 00:13:53 |
| 12 | TABLE ACCESS FULL | COUNTRY_REGION_MAPPINGS | 177 | 2478 | | 3 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | | | | | |
| 14 | NESTED LOOPS | | 377K| 35M| | 69335 (1)| 00:13:53 |
| 15 | MAT_VIEW ACCESS FULL | PROJINFO_MAX_ITER_MVW | 17713 | 328K| | 782 (1)| 00:00:10 |
|* 16 | INDEX RANGE SCAN | Q_CLIN_ASSUM_BYCOUN_PK | 1 | | | 3 (0)| 00:00:01 |
| 17 | TABLE ACCESS BY INDEX ROWID| Q_CLINICAL_ASSUM_BYCOUNTRY | 21 | 1701 | | 4 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("R2"."TRIAL_COUNTRY_CD"="CRM"."COUNTRY_CD" AND
UPPER("CRM"."COUNTRY")=UPPER("QCAB"."TRIAL_COUNTRY"))
3 - access("R2"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "R2"."ITERATION"="QCAB"."ITERATION" AND
"R2"."CONTRACTNUM"="QCAB"."CONTRACTNUM" AND "R2"."ASSUMPTION"="QCAB"."ASSUMPTION")
7 - access("PMIM"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "PMIM"."CONTRACTNUM"="QCAB"."CONTRACTNUM" AND
"PMIM"."ITERATION"="QCAB"."ITERATION")
filter(UPPER("QCAB"."SHEET_LOC") LIKE '%COUNTRY ASSUMPTIONS%' OR UPPER("QCAB"."SHEET_LOC") LIKE 'INPUT%')
11 - access(UPPER("CRM"."COUNTRY"(+))=UPPER("QCAB"."TRIAL_COUNTRY"))
16 - access("PMIM"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "PMIM"."CONTRACTNUM"="QCAB"."CONTRACTNUM" AND
"PMIM"."ITERATION"="QCAB"."ITERATION")
filter(UPPER("QCAB"."SHEET_LOC") LIKE '%COUNTRY ASSUMPTIONS%' OR UPPER("QCAB"."SHEET_LOC") LIKE 'INPUT%')
Typically you would start with the analytic function COUNT(*), which leads to compact SQL.
The drawback of this approach is that the data must be sorted (see the WINDOW SORT operation). The GROUP BY approach avoids
the sorting because a HASH GROUP BY may be used, which can lead to better performance.
Your example is a bit more involved, as you do not query a table but a view that joins three tables; with the GROUP BY approach this join is performed twice, once for the grouping and once for the detail data, which
is of course not optimal.
So I would start with the analytic function version of the query (possibly with a PARALLEL option).
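A sketch of that version with a statement-level parallel hint (degree 4 is an arbitrary example value; swap in your real view for the Marks table from the question):
SELECT /*+ parallel(4) */
       firstname,
       lastname,
       Mark,
       CASE WHEN COUNT(*) OVER (PARTITION BY firstname, lastname) > 1
            THEN 1 ELSE 0 END AS MULTI_FLAG
FROM Marks;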
If you want to try the GROUP BY, a lightweight version is possible:
1) group only the duplicated keys
2) make an OUTER JOIN to assign the MULTI_FLAG
Example with execution plan below (a simple test with your data):
with dups as (
select firstname,lastname from tmp
group by firstname,lastname
having count(*) > 1)
select tmp.FIRSTNAME, tmp.LASTNAME, tmp.MARK,
case when dups.firstname is not NULL then 1 else 0 end as MULTI_FLAG
from tmp
left outer join dups on tmp.firstname = dups.firstname and tmp.lastname = dups.lastname;
You still need to access your view twice, but the final join will be faster (especially if you have only a small number of duplicated keys).
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 105K| 26M| | 1673 (1)| 00:00:21 |
|* 1 | HASH JOIN RIGHT OUTER| | 105K| 26M| 11M| 1673 (1)| 00:00:21 |
| 2 | VIEW | | 105K| 10M| | 128 (4)| 00:00:02 |
|* 3 | FILTER | | | | | | |
| 4 | HASH GROUP BY | | 105K| 10M| | 128 (4)| 00:00:02 |
| 5 | TABLE ACCESS FULL| TMP | 105K| 10M| | 125 (1)| 00:00:02 |
| 6 | TABLE ACCESS FULL | TMP | 105K| 15M| | 125 (1)| 00:00:02 |
--------------------------------------------------------------------------------------

How to speed up an ORDER BY query in Oracle

My pagination query below runs fast (2.5 seconds) without the ORDER BY.
If I use the ORDER BY, it slows down to about 180 seconds.
The total number of records is only 90,000.
select * from(
select i.*,rownum rno from (
select opp.updat,nvl(s.name,c.vemail),s.name,c.vemail
from sfa_opportunities opp,sfa_company s, customer c
where opp.companyid = c.companyid(+)
and opp.custid = c.custid(+)
and opp.companyid = s.companyid(+)
and opp.sfacompid = s.sfacompid(+)
order by 2 asc, 1 asc
)i) where rno >= 1 and rno <= 30
I have given the explain plan below for reference.
---------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 97980 | 110M| | 14137 (1)| 00:03:18 |
|* 1 | VIEW | | 97980 | 110M| | 14137 (1)| 00:03:18 |
| 2 | COUNT | | | | | | |
| 3 | VIEW | | 97980 | 109M| | 14137 (1)| 00:03:18 |
| 4 | SORT ORDER BY | | 97980 | 6602K| 15M| 14137 (1)| 00:03:18 |
| 5 | NESTED LOOPS OUTER | | 97980 | 6602K| | 13137 (1)| 00:03:04 |
|* 6 | HASH JOIN RIGHT OUTER | | 97980 | 3635K| 1136K| 614 (1)| 00:00:09 |
| 7 | TABLE ACCESS FULL | SFA_COMPANY | 34851 | 714K| | 58 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL | SFA_OPPORTUNITIES | 97980 | 1626K| | 390 (1)| 00:00:06 |
|* 9 | TABLE ACCESS BY INDEX ROWID| CUSTOMER | 1 | 31 | | 1 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | PK_CUSTOMER_CUSTID | 1 | | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RNO"<=30 AND "RNO">=1)
6 - access("OPP"."COMPANYID"="S"."COMPANYID"(+) AND "OPP"."SFACOMPID"="S"."SFACOMPID"(+))
9 - filter("OPP"."COMPANYID"="C"."COMPANYID"(+))
10 - access("OPP"."CUSTID"="C"."CUSTID"(+))
You're sorting on nvl(s.name,c.vemail) first and then on opp.updat. My guess is that the NVL may prevent a lot of optimization, because Oracle can't tell what the value of that expression is going to be without looking at every row in the joined result. You could try adding indexes on those three columns, or a function-based index on nvl(s.name,c.vemail).
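A sketch of the column indexes mentioned (index names are made up). Note that nvl(s.name, c.vemail) mixes columns from two different tables, so a function-based index on that exact expression cannot be created on a single table; indexing the underlying columns is the closest direct option:
CREATE INDEX sfa_company_name_ix        ON sfa_company (name);
CREATE INDEX customer_vemail_ix         ON customer (vemail);
CREATE INDEX sfa_opportunities_updat_ix ON sfa_opportunities (updat);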

Optimize TO_TIMESTAMP() query within Oracle

Every time I execute this query it takes about 2 minutes:
select * from CPOB_Monitoring_Dashboard
where VOYAGE_STRT_DT >= TO_TIMESTAMP('2014-07-03 00:00:00.000','YYYY-MM-DD HH24:MI:SS.FF')
and VOYAGE_STRT_DT <= TO_TIMESTAMP('2018-07-03 00:00:00.000','YYYY-MM-DD HH24:MI:SS.FF')
However, if I change it to use TO_DATE instead of TO_TIMESTAMP, it is really fast.
LINQ generates the query using TO_TIMESTAMP and I have not yet found a way to make it use TO_DATE. Is there any way I can optimize the TO_TIMESTAMP query?
Here is the execution plan for the query using TO_TIMESTAMP:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 246273147
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 21842 | 4820K| | 1336 (1)| 00:00:17 |
| 1 | VIEW | CPOB_Monitoring_Dashboard| 21842 | 4820K| | 1336 (1)| 00:00:17 |
| 2 | HASH UNIQUE | | 21842 | 3988K| 4384K| 1336 (1)| 00:00:17 |
| 3 | NESTED LOOPS | | 21842 | 3988K| | 442 (1)| 00:00:06 |
| 4 | NESTED LOOPS | | 47 | 7661 | | 160 (1)| 00:00:02 |
|* 5 | TABLE ACCESS FULL | VOYAGE_INFO | 46 | 1012 | | 68 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| PROCESS_CTRL | 1 | 141 | | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | VOYAGE_ID_IDX | 1 | | | 1 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | PLY_IDX2 | 467 | 11208 | | 6 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter(INTERNAL_FUNCTION("CPVI"."VOYAGE_STRT_DT")>=TIMESTAMP' 2014-07-03 00:00:00.000000000' AND
INTERNAL_FUNCTION("CPVI"."VOYAGE_STRT_DT")<=TIMESTAMP' 2018-07-03 00:00:00.000000000')
7 - access("CPC"."VOYAGE_ID"="CPVI"."VOYAGE_ID")
8 - access("CPC"."BRAND_NAME"="CPOB"."BRAND_ID" AND "CPC"."SHIP_NAME"=""SHIP_NAME")
filter("CPC"."SHIP_NAME"="CPOB"."SHIP_NAME")
24 rows selected.
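For comparison, the fast TO_DATE variant would presumably look like this (a sketch; assuming VOYAGE_STRT_DT is a DATE column, which the INTERNAL_FUNCTION conversion in the plan suggests, both sides are then DATEs and no conversion of the column is needed):
SELECT *
FROM CPOB_Monitoring_Dashboard
WHERE VOYAGE_STRT_DT >= TO_DATE('2014-07-03 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
  AND VOYAGE_STRT_DT <= TO_DATE('2018-07-03 00:00:00', 'YYYY-MM-DD HH24:MI:SS');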

Oracle LEFT JOIN View performance

I have two tables; they aren't large tables.
I have created a view based on these tables:
select
tab_a.id as id,
tab_a.name as name
from tableA tab_a
UNION ALL
select
tab_b.id as id,
tab_b.name as name
from tableB tab_b
Finally, I have a third table, let's call it tableMain, with the fields:
tableMain.id, tableMain.status, tableMain.viewId
viewId exists to join to the view.
The final select looks like:
SELECT tableMain.id
FROM tableMain
LEFT OUTER JOIN VIEW ON tableMain.viewId=view.id
The join against the view is very slow.
It is fast if I join tableA or tableB directly, but not when I use the view.
It becomes fast if I use view.name in the select:
SELECT tableMain.id, VIEW.name
FROM tableMain
LEFT OUTER JOIN VIEW ON tableMain.viewId=view.id
I'm not sure why the view join works fast if I use a view field in the select,
or how to make the view join fast without it.
Posting plans:
Good Plan (using VIEW.name in SELECT)
SELECT tableMain.id, VIEW.name
FROM tableMain
LEFT OUTER JOIN VIEW ON tableMain.viewId=view.id
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 220K| 440M| 50 (4)| 00:00:01 |
|* 1 | HASH JOIN OUTER | | 220K| 440M| 50 (4)| 00:00:01 |
| 2 | TABLE ACCESS FULL | **tableMain** | 19796 | 1527K| 42 (0)| 00:00:01 |
| 3 | VIEW | ***VIEW*** | 1115 | 2194K| 6 (0)| 00:00:01 |
| 4 | UNION-ALL | | | | | |
| 5 | TABLE ACCESS FULL| **tableA** | 818 | 1609K| 3 (0)| 00:00:01 |
|* 6 | TABLE ACCESS FULL| **tableB** | 297 | 5346 | 3 (0)| 00:00:01 |
Bad Plan (no view.name in select)
SELECT tableMain.id
FROM tableMain
LEFT OUTER JOIN VIEW ON tableMain.viewId=view.id
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 220K| 19M| 51 (6)| 00:00:01 | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 220K| 19M| 51 (6)| 00:00:01 | Q1,03 | P->S | QC (RAND) |
|* 3 | HASH JOIN RIGHT OUTER | | 220K| 19M| 51 (6)| 00:00:01 | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 1115 | 14495 | 6 (0)| 00:00:01 | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 1115 | 14495 | 6 (0)| 00:00:01 | Q1,02 | P->P | HASH |
| 6 | BUFFER SORT | | 220K| 19M| | | Q1,02 | PCWP | |
| 7 | VIEW | ***VIEW*** | 1115 | 14495 | 6 (0)| 00:00:01 | Q1,02 | PCWP | |
| 8 | UNION-ALL | | | | | | Q1,02 | PCWP | |
| 9 | PX BLOCK ITERATOR | | 818 | 10634 | 3 (0)| 00:00:01 | Q1,02 | PCWC | |
| 10 | INDEX FAST FULL SCAN| ***tableA_PK*** | 818 | 10634 | 3 (0)| 00:00:01 | Q1,02 | PCWP | |
| 11 | BUFFER SORT | | | | | | Q1,02 | PCWC | |
| 12 | PX RECEIVE | | 297 | 2079 | 3 (0)| 00:00:01 | Q1,02 | PCWP | |
| 13 | PX SEND ROUND-ROBIN| :TQ10000 | 297 | 2079 | 3 (0)| 00:00:01 | | S->P | RND-ROBIN |
|* 14 | TABLE ACCESS FULL | **tableB** | 297 | 2079 | 3 (0)| 00:00:01 | | | |
| 15 | BUFFER SORT | | | | | | Q1,03 | PCWC | |
| 16 | PX RECEIVE | | 19796 | 1527K| 42 (0)| 00:00:01 | Q1,03 | PCWP | |
| 17 | PX SEND HASH | :TQ10001 | 19796 | 1527K| 42 (0)| 00:00:01 | | S->P | HASH |
| 18 | TABLE ACCESS FULL | **tableMain** | 19796 | 1527K| 42 (0)| 00:00:01 | | | |
Why is there such a big difference?
Something is forcing parallelism. Does the view have any hints? Is there some type of plan management happening with this query? For example, is there an outline, SQL Plan Management baseline, or SQL profile set up only on the bad query? You may be able to find out by checking
the Note section of the explain plan. If I'm right, there will be something like this in only one of the execution plans:
Note
-----
- SQL plan baseline "SQL_PLAN_01yu884fpund494ecae5c" used for this statement
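One quick way to check is to re-run the slow statement and then display its cursor plan, which includes any Note section (a sketch; this reads the last statement executed in the session and needs access to the V$ views):
-- Show the plan of the last statement run in this session;
-- baselines, profiles, and outlines in use are reported in the Note section.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'TYPICAL'));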
It would also help to define "very slow". If the good query runs in 0.01 seconds and the bad query runs in 2 seconds, the difference may be entirely the overhead of
parallelism. But if the query was tuned for an environment with much larger data volumes, you may want to keep the bad plan anyway; it may run better in production.

Optimize counts in SELECT with JOIN on Oracle

Hello all :) I have two tables of about 30 million rows each, and I'm seeking to improve performance when counts are performed.
Here is the query:
SELECT count(*)
FROM VEHICULE v
JOIN CLIENT c ON c.CL_ID = v.VE_CL_ID
WHERE v.VE_BRAND = 'MITSUBISHI'
AND c.CL_COUNTRY = 'SPAIN';
The foreign key is declared in the VEHICULE table
CONSTRAINT "VEHICULE_CLIENT_FK" FOREIGN KEY ("VE_CL_ID")
REFERENCES "MY_SCHEMA"."CLIENT" ("CL_ID") ENABLE
And there is an index on the foreign key:
CREATE INDEX "MY_SCHEMA"."VEHICULE_INDEX_CLIENT" ON "MY_SCHEMA"."VEHICULE" ("CL_ID")
There are indexes also on the columns used for the search criteria.
The requests can take up to 40 seconds. I have looked at bitmap join indexes, but I don't know whether they will help, as bitmap join indexes are supposed to be for columns with low cardinality. Is this the only type of index for joins? I'm totally at a loss as to how I can improve the performance.
EDIT:
Here is what the SQL Tuning Advisor in SQL Developer displays (execution plan).
The SQL for this plan is without AND c.CL_COUNTRY = 'SPAIN'.
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name : staName9168
Tuning Task Owner : USER
Tuning Task ID : 12125
Scope : COMPREHENSIVE
Time Limit(seconds): 1800
Completion Status : COMPLETED
Started at : 04/23/2013 15:44:35
Completed at : 04/23/2013 15:44:36
-------------------------------------------------------------------------------
There are no recommendations to improve the statement.
-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------
1- Original
-----------
Plan hash value: 3808155432
------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 21 | 54011 (1)| 00:10:49 | | | |
| 1 | SORT AGGREGATE | | 1 | 21 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | 1 | 21 | | | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 21 | | | Q1,01 | PCWP | |
|* 5 | HASH JOIN | | 475K| 9745K| 54011 (1)| 00:10:49 | Q1,01 | PCWP | |
| 6 | BUFFER SORT | | | | | | Q1,01 | PCWC | |
| 7 | PX RECEIVE | | 475K| 6497K| 32813 (1)| 00:06:34 | Q1,01 | PCWP | |
| 8 | PX SEND BROADCAST | :TQ10000 | 475K| 6497K| 32813 (1)| 00:06:34 | | S->P | BROADCAST |
|* 9 | TABLE ACCESS BY INDEX ROWID| VEHICULE | 475K| 6497K| 32813 (1)| 00:06:34 | | | |
|* 10 | INDEX RANGE SCAN | VEHICULE_INDEX_BRAND | 616K| | 1621 (2)| 00:00:20 | | | |
| 11 | PX BLOCK ITERATOR | | 20M| 138M| 21146 (1)| 00:04:14 | Q1,01 | PCWC | |
| 12 | TABLE ACCESS FULL | CLIENT | 20M| 138M| 21146 (1)| 00:04:14 | Q1,01 | PCWP | |
------------------------------------------------------------------------------------------------------------------------------------------
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
1 - SEL$58A6D7F6
9 - SEL$58A6D7F6 / VEHICULE#SEL$1
10 - SEL$58A6D7F6 / VEHICULE#SEL$1
12 - SEL$58A6D7F6 / CLIENT#SEL$1
Predicate Information (identified by operation id):
---------------------------------------------------
5 - access("VE_CL_ID"="CL_ID")
9 - filter("VE_CL_ID" IS NOT NULL)
10 - access("VEHICULE"."VE_BRAND"='MITSUBISHI')
Column Projection Information (identified by operation id):
-----------------------------------------------------------
1 - (#keys=0) COUNT()[22]
2 - SYS_OP_MSR()[10]
3 - (#keys=0) SYS_OP_MSR()[10]
4 - (#keys=0) SYS_OP_MSR()[10]
5 - (#keys=1)
6 - (#keys=0) "VE_CL_ID"[NUMBER,22]
7 - "VE_CL_ID"[NUMBER,22]
8 - (#keys=0) "VE_CL_ID"[NUMBER,22]
9 - "VE_CL_ID"[NUMBER,22]
10 - "VEHICULE".ROWID[ROWID,10]
11 - "CL_ID"[NUMBER,22]
12 - "CL_ID"[NUMBER,22]
-------------------------------------------------------------------------------
Create composite indexes on client (cl_country, cl_id) and vehicule (ve_brand, ve_cl_id) (both in this order).
This way you could get rid of table access on both tables.
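A sketch of those composite indexes (index names are hypothetical):
CREATE INDEX client_country_id_ix   ON client   (cl_country, cl_id);
CREATE INDEX vehicule_brand_clid_ix ON vehicule (ve_brand, ve_cl_id);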
If only a few countries and brands are possible, you could also partition the indexes by country and brand so that an INDEX FAST FULL SCAN could be used instead of an INDEX RANGE SCAN.
You could also consider creating a cluster on the client id, which would cause the vehicle and client data to be stored in the same or nearby data blocks, thus improving I/O.
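For completeness, the cluster idea would look roughly like this (a sketch only: both tables have to be created inside the cluster, so the column lists below are abbreviated and hypothetical):
CREATE CLUSTER client_vehicule_cl (cl_id NUMBER);
CREATE INDEX client_vehicule_cl_ix ON CLUSTER client_vehicule_cl;
-- Both tables must be (re)created in the cluster, keyed on the client id:
CREATE TABLE client (cl_id NUMBER PRIMARY KEY, cl_country VARCHAR2(100) /* ... */)
  CLUSTER client_vehicule_cl (cl_id);
CREATE TABLE vehicule (ve_id NUMBER PRIMARY KEY, ve_cl_id NUMBER, ve_brand VARCHAR2(100) /* ... */)
  CLUSTER client_vehicule_cl (ve_cl_id);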
