is there a tricky way to optimize this query - oracle

I'm working on a table that has 3,008,698 rows.
exam_date is a DATE field.
But queries I run want to match only the month part. So what I do is:
select * from my_big_table where to_number(to_char(exam_date, 'MM')) = 5;
which I believe takes long because of the function on the column. Is there a way to avoid this and make it faster, other than making changes to the table? The exam_date values in the table vary, like 01-OCT-10 or 12-OCT-10, and so on.

I don't know Oracle, but what about doing
WHERE exam_date BETWEEN first_of_month AND last_of_month
where the two dates are constant expressions.
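If you only need one specific month of one specific year, a minimal sketch of that rewrite (May 2010 here is a made-up example) would be:

select *
from my_big_table
where exam_date >= date '2010-05-01'
  and exam_date <  date '2010-06-01';

Since the column itself is untouched, a plain index on exam_date can be used. Note, though, that the original question asks for May of every year, which a single range like this cannot express.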

select * from my_big_table where MONTH(exam_date) = 5
oops.. Oracle huh?..
select * from my_big_table where EXTRACT(MONTH from exam_date) = 5

Bear in mind that since you want approximately 1/12th of all the data, it may well be more efficient for Oracle to perform a full table scan anyway. This may explain why performance was worse when you followed harpo's advice.
Why? Suppose your data is such that 20 rows fit on each database block (on average), so that you have a total of 3,000,000/20 = 150,000 blocks. That means a full table scan will require 150,000 block reads. Now about 1/12th of the 3,000,000 rows will be for month 05, and 3,000,000/12 is 250,000. Because rows for month 05 are scattered throughout the table, fetching them via the index costs potentially one block read per row: that's up to 250,000 table block reads, and that's ignoring the index reads that will also be required. So in this example the full table scan does a lot less work than the indexed search.

Bear in mind that there are only twelve distinct values for MONTH. So unless you have a strongly clustered set of records (say, if you use partitioning), it is possible that using an index is not the most efficient way of querying in this fashion.
I didn't find that using EXTRACT() led the optimizer to use a regular index on my date column, but YMMV:
SQL> create index big_d_idx on big_table(col3) compute statistics
2 /
Index created.
SQL> set autotrace traceonly explain
SQL> select * from big_table
2 where extract(MONTH from col3) = 'MAY'
3 /
Execution Plan
----------------------------------------------------------
Plan hash value: 3993303771
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23403 | 1028K| 4351 (3)| 00:00:53 |
|* 1 | TABLE ACCESS FULL| BIG_TABLE | 23403 | 1028K| 4351 (3)| 00:00:53 |
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(EXTRACT(MONTH FROM INTERNAL_FUNCTION("COL3"))=TO_NUMBER('MAY'))
SQL>
What definitely can persuade the optimizer to use an index in these scenarios is building a function-based index:
SQL> create index big_mon_fbidx on big_table(extract(month from col3))
2 /
Index created.
SQL> select * from big_table
2 where extract(MONTH from col3) = 'MAY'
3 /
Execution Plan
----------------------------------------------------------
Plan hash value: 225326446
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23403 | 1028K| 475 (0)|00:00:06|
| 1 | TABLE ACCESS BY INDEX ROWID| BIG_TABLE | 23403 | 1028K| 475 (0)|00:00:06|
|* 2 | INDEX RANGE SCAN | BIG_MON_FBIDX | 9361 | | 382 (0)|00:00:05|
-------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(EXTRACT(MONTH FROM INTERNAL_FUNCTION("COL3"))=TO_NUMBER('MAY'))
SQL>

The function call means that Oracle won't be able to use any index that might be defined on the column.
Either remove the function call (as in harpo's answer) or use a function based index.
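For the original query, a minimal sketch of the function-based index route (assuming you are allowed to create an index; the predicate must match the indexed expression exactly):

create index exam_month_idx on my_big_table(extract(month from exam_date));

select * from my_big_table
where extract(month from exam_date) = 5;

Remember to gather statistics on the table afterwards so the optimizer knows about the new index.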

Related

Understanding characteristics of a query for which an index makes a dramatic difference

I am trying to come up with an example showing that indexes can have a dramatic (orders of magnitude) effect on query execution time. After hours of trial and error I am still at square one. Namely, the speed-up is not large even when the execution plan shows the index being used.
Since I realized that I better have a large table for the index to make a difference, I wrote the following script (using Oracle 11g Express):
CREATE TABLE many_students (
    student_id NUMBER(11),
    city       VARCHAR(20)
);
DECLARE
    nStudents NUMBER := 1000000;
    nCities   NUMBER := 10000;
    curCity   VARCHAR(20);
BEGIN
    FOR i IN 1 .. nStudents LOOP
        curCity := ROUND(DBMS_RANDOM.VALUE()*nCities, 0) || ' City';
        INSERT INTO many_students VALUES (i, curCity);
    END LOOP;
    COMMIT;
END;
/
I then tried quite a few queries, such as:
select count(*)
from many_students M
where M.city = '5467 City';
and
select count(*)
from many_students M1
join many_students M2 using(city);
and a few other ones.
I have seen this post and think that my queries satisfy the requirements stated in the replies there. However, none of the queries I tried showed dramatic improvement after building an index: create index myindex on many_students(city);
Am I missing some characteristic that distinguishes a query for which an index makes a dramatic difference? What is it?
The test case is a good start but it needs a few more things to get a noticeable performance difference:
Realistic data sizes. One million rows of two small values is a small table. With a table that small the performance difference between a good and a bad execution plan may not matter much.
The script below will double the table size until it reaches 64 million rows. It takes about 20 minutes on my machine. (To make it go quicker for larger sizes, you could make the table nologging and add an /*+ append */ hint to the insert; a sketch of that variant follows the size check below.)
--Increase the table to 64 million rows. This took 20 minutes on my machine.
insert into many_students select * from many_students;
insert into many_students select * from many_students;
insert into many_students select * from many_students;
insert into many_students select * from many_students;
insert into many_students select * from many_students;
insert into many_students select * from many_students;
commit;
--The table has about 1.375GB of data. The actual size will vary.
select bytes/1024/1024/1024 gb from dba_segments where segment_name = 'MANY_STUDENTS';
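For reference, a sketch of that direct-path variant (nologging affects recoverability, so treat it as test-system only):

alter table many_students nologging;
insert /*+ append */ into many_students select * from many_students;
--A direct-path insert must be committed before the table can be read again.
commit;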
Gather statistics. Always gather statistics after large table changes. The optimizer cannot do its job well unless it has table, column, and index statistics.
begin
dbms_stats.gather_table_stats(user, 'MANY_STUDENTS');
end;
/
Use hints to force a good and bad plan. Optimizer hints should usually be avoided. But to quickly compare different plans they can be helpful for temporarily forcing a particular plan.
For example, this will force a full table scan:
select /*+ full(M) */ count(*) from many_students M where M.city = '5467 City';
But you'll also want to verify the execution plan:
explain plan for select /*+ full(M) */ count(*) from many_students M where M.city = '5467 City';
select * from table(dbms_xplan.display);
Flush the cache. Caching is probably the main culprit behind the index and full table scan queries taking the same amount of time. If the table fits entirely in memory then the time to read all the rows may be almost too small to measure. The number could be dwarfed by the time to parse the query or to send a simple result across the network.
This command will force Oracle to remove almost everything from the buffer cache. This will help you test a "cold" system. (You probably do not want to run this statement on a production system.)
alter system flush buffer_cache;
However, that won't flush the operating system or SAN cache. And maybe the table really would fit in memory on production. If you need to test a fast query it may be necessary to put it in a PL/SQL loop.
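For example, a minimal sketch of such a loop, reusing the table and value from above (the iteration count and DBMS_UTILITY.GET_TIME are my choices, not from the original test):

set serveroutput on
declare
    v_count number;
    v_start number := dbms_utility.get_time;  --hundredths of a second
begin
    for i in 1 .. 1000 loop
        select count(*) into v_count
        from many_students where city = '5467 City';
    end loop;
    dbms_output.put_line('centiseconds: ' || (dbms_utility.get_time - v_start));
end;
/

Dividing the total by the iteration count gives a per-execution time large enough to measure.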
Multiple, alternating runs. There are many things happening in the background, like caching and other processes. It is easy to get bad results because something unrelated changed on the system.
Maybe the first run takes extra long to put things in a cache. Or maybe some huge job was started between queries. To avoid those issues, alternate running the two queries. Run them five times, throw out the highs and lows, and compare the averages.
For example, copy and paste the statements below five times and run them. (If using SQL*Plus, run set timing on first.) I already did that and posted the times I got in a comment before each line.
--Seconds: 0.02, 0.02, 0.03, 0.234, 0.02
alter system flush buffer_cache;
select count(*) from many_students M where M.city = '5467 City';
--Seconds: 4.07, 4.21, 4.35, 3.629, 3.54
alter system flush buffer_cache;
select /*+ full(M) */ count(*) from many_students M where M.city = '5467 City';
Testing is hard. Putting together decent performance tests is difficult. The above rules are only a start.
This might seem like overkill at first. But it's a complex topic. And I've seen so many people, including myself, waste a lot of time "tuning" something based on a bad test. Better to spend the extra time now and get the right answer.
An index really shines when the database doesn't need to go to every row in a table to get your results. So COUNT(*) isn't the best example. Take this for example:
alter session set statistics_level = 'ALL';
create table mytable as select * from all_objects;
select * from mytable where owner = 'SYS' and object_name = 'DUAL';
select * from table( dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ));
---------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 300 |00:00:00.01 | 12 |
| 1 | TABLE ACCESS FULL| MYTABLE | 1 | 19721 | 300 |00:00:00.01 | 12 |
---------------------------------------------------------------------------------------
So, here, the database does a full table scan (TABLE ACCESS FULL), which means it has to visit every row in the table, which means it has to load every block from disk. Lots of I/O. The optimizer guessed that it was going to find 19,721 rows (the E-Rows column), but the actual number returned (A-Rows) was far smaller.
Compare that with this:
create index myindex on mytable( owner, object_name );
select * from mytable where owner = 'SYS' and object_name = 'JOB$';
select * from table( dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ));
----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 3 | 2 |
| 1 | TABLE ACCESS BY INDEX ROWID| MYTABLE | 1 | 2 | 1 |00:00:00.01 | 3 | 2 |
|* 2 | INDEX RANGE SCAN | MYINDEX | 1 | 1 | 1 |00:00:00.01 | 2 | 2 |
----------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OWNER"='SYS' AND "OBJECT_NAME"='JOB$')
Here, because there's an index, it does an INDEX RANGE SCAN to find the rowids for the table that match our criteria. Then, it goes to the table itself (TABLE ACCESS BY INDEX ROWID) and looks up only the rows we need and can do so efficiently because it has a rowid.
And even better, if you happen to be looking for something that is entirely in the index, the scan doesn't even have to go back to the base table. The index is enough:
select count(*) from mytable where owner = 'SYS';
select * from table( dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ));
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 46 | 46 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 46 | 46 |
|* 2 | INDEX RANGE SCAN| MYINDEX | 1 | 8666 | 9294 |00:00:00.01 | 46 | 46 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OWNER"='SYS')
Because my query involved the owner column and that's contained in the index, it never needs to go back to the base table to look anything up there. So the index scan is enough; then it does an aggregation to count the rows. This scenario is a little less than perfect, because the index is on (owner, object_name) and not just owner, but it's definitely better than doing a full table scan on the main table.
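If this count(*) query mattered on its own, a sketch of a more tailored fix (my addition, not from the original answer) would be a single-column index, which keeps the leaf blocks smaller:

create index myindex_owner on mytable( owner );
select count(*) from mytable where owner = 'SYS';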

Oracle is not using the Indexes

I have a very large table in Oracle 11g that has a very simple index on a char field (that is normally 'Y' or 'N').
If I just execute the query below, it takes around 10s to return:
select QueueId, QueueSiteId, QueueData from queue where QueueProcessed = 'N'
However if I force it to use the index I created, it takes 80ms:
select /*+ INDEX(avaqueue QUEUEPROCESSED_IDX) */ QueueId, QueueSiteId, QueueData
from queue where QueueProcessed = 'N'
Also I ran explain plan for both queries, as below:
explain plan for select QueueId, QueueSiteId, QueueData
from queue where QueueProcessed = 'N'
and
explain plan for select /*+ INDEX(avaqueue QUEUEPROCESSED_IDX) */
QueueId, QueueSiteId, QueueData
from queue where QueueProcessed = 'N'
For the first plan I got:
------------------------------------------------------------------------------
Plan hash value: 803924726
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 691K| 128M| 12643 (1)| 00:02:32 |
|* 1 | TABLE ACCESS FULL| AVAQUEUE | 691K| 128M| 12643 (1)| 00:02:32 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("QUEUEPROCESSED"='N')
For the second plan I got:
Plan hash value: 2012309891
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 691K| 128M| 24386 (1)| 00:04:53 |
| 1 | TABLE ACCESS BY INDEX ROWID| AVAQUEUE | 691K| 128M| 24386 (1)| 00:04:53 |
|* 2 | INDEX RANGE SCAN | QUEUEPROCESSED_IDX | 691K| | 1297 (1)| 00:00:16 |
--------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("QUEUEPROCESSED"='N')
This proves that if I don't explicitly tell Oracle to use the index, it does not use it. My question is: why is Oracle not using this index? Oracle is normally smart enough to make decisions 10 times better than mine; this is the first time I have actually had to force Oracle to use an index, and I am not very comfortable with it.
Does anyone have a good explanation for Oracle's decision not to use the index in this very explicit case?
The QueueProcessed column is probably missing a histogram so Oracle does not know the data is skewed.
If Oracle does not know the data is skewed it will assume the equality predicate, QueueProcessed = 'N', returns DBA_TABLES.NUM_ROWS / DBA_TAB_COLUMNS.NUM_DISTINCT rows. With two distinct values, the optimizer thinks the query returns half the rows in the table. Based on the 80ms return time, the real number of rows returned is small.
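If you want to see the inputs to that estimate, here is a sketch of a dictionary query (table and column names from this question; NULLIF guards against missing statistics):

select t.num_rows, c.num_distinct, c.histogram,
       round(t.num_rows / nullif(c.num_distinct, 0)) naive_estimate
from user_tables t
join user_tab_columns c on c.table_name = t.table_name
where t.table_name = 'QUEUE'
  and c.column_name = 'QUEUEPROCESSED';

With no histogram and two distinct values, naive_estimate comes out at half the row count.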
Index range scans generally only work well when they select a small percentage of the rows. Index range scans read from a data structure one block at a time. And if the data is randomly distributed, it may need to read every block of data from the table anyway. For those reasons, if the query accesses a large portion of the table, it is more efficient to use a multi-block full table scan.
The bad cardinality estimate from the skewed data causes Oracle to think a full table scan is better. Creating a histogram will fix the issue.
Sample schema
Create a table, fill it with skewed data, and gather statistics the first time.
drop table queue;
create table queue(
queueid number,
queuesiteid number,
queuedata varchar2(4000),
queueprocessed varchar2(1)
);
create index QUEUEPROCESSED_IDX on queue(queueprocessed);
--Skewed data - only 100 of the 100000 rows are set to N.
insert into queue
select level, level, level, decode(mod(level, 1000), 0, 'N', 'Y')
from dual connect by level <= 100000;
begin
dbms_stats.gather_table_stats(user, 'QUEUE');
end;
/
The first execution will have the problem.
In this case the default statistics settings do not gather histograms the first time. The plan shows a full table scan and estimates Rows=50000, exactly half.
explain plan for
select QueueId, QueueSiteId, QueueData
from queue where QueueProcessed = 'N';
select * from table(dbms_xplan.display);
Plan hash value: 1157425618
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50000 | 878K| 103 (1)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| QUEUE | 50000 | 878K| 103 (1)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("QUEUEPROCESSED"='N')
Create a histogram
The default statistics settings are usually sufficient. Histograms may not be collected for several reasons. They may be manually disabled; check for tasks, jobs, or preferences set by the DBA.
Also, histograms are only automatically collected on columns that are both skewed and used. Gathering histograms can take time, so there's no need to create a histogram on a column that is never used in a relevant predicate. Oracle tracks when a column is used and could benefit from a histogram, although that data is lost if the table is dropped.
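On 11.2 and later you can inspect that tracked usage; a sketch, assuming you have execute rights on DBMS_STATS:

select dbms_stats.report_col_usage(user, 'QUEUE') from dual;

The report lists, per column, the kinds of predicates Oracle has seen it used in.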
Running a sample query and re-gathering statistics will make the histogram appear:
select QueueId, QueueSiteId, QueueData
from queue where QueueProcessed = 'N';
begin
dbms_stats.gather_table_stats(user, 'QUEUE');
end;
/
Now the plan shows Rows=100 and the index is used.
explain plan for
select QueueId, QueueSiteId, QueueData
from queue where QueueProcessed = 'N';
select * from table(dbms_xplan.display);
Plan hash value: 2630796144
----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100 | 1800 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| QUEUE | 100 | 1800 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | QUEUEPROCESSED_IDX | 100 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("QUEUEPROCESSED"='N')
Here's the histogram:
select column_name, histogram
from dba_tab_columns
where table_name = 'QUEUE'
order by column_name;
COLUMN_NAME HISTOGRAM
----------- ---------
QUEUEDATA NONE
QUEUEID NONE
QUEUEPROCESSED FREQUENCY
QUEUESITEID NONE
Gather the histogram manually
Try to determine why the histogram was missing. Check that statistics are gathered with the defaults, that there are no weird column or table preferences, and that the table is not constantly dropped and re-loaded.
If you cannot rely on the default statistics job for your process you can manually gather histograms with the method_opt parameter like this:
begin
dbms_stats.gather_table_stats(user, 'QUEUE', method_opt=>'for columns size 254 queueprocessed');
end;
/
The answer - at least the first one that will just lead to more questions - is right there in the plans. The first plan has an estimated cost and estimated execution time about half that of the second plan. In the absence of the hint, Oracle is choosing the plan that it thinks will run faster.
So of course the next question is why is its estimate so far off in this case. Not only are the estimated times wrong relative to each other, both are much greater than what you actually experience when running the query.
The first thing I would look at is the estimated number of rows returned. The optimizer is guessing, in both cases, that there are about 691,000 rows in the table matching your predicate. Is this close to the truth, or very far off? If it's far off, then refreshing statistics may be the right solution. Although if the column only has two possible values, I'd be kind of surprised if the existing stats are so off base.

Oracle linguistic index not used when SQL contains parameter with LIKE

My schema (simplified):
CREATE TABLE LOC
(
LOC_ID NUMBER(15,0) NOT NULL,
LOC_REF_NO VARCHAR2(100 CHAR) NOT NULL
)
/
CREATE INDEX LOC_REF_NO_IDX ON LOC
(
NLSSORT("LOC_REF_NO",'nls_sort=''BINARY_AI''') ASC
)
/
My query (in SQL*Plus):
ALTER SESSION SET NLS_COMP=LINGUISTIC NLS_SORT=BINARY_AI
/
VAR LOC_REF_NO VARCHAR2(50)
BEGIN
:LOC_REF_NO := 'SPDJ1501270';
END;
/
-- Causes full table scan (i.e, does not use LOC_REF_NO_IDX)
SELECT * FROM LOC WHERE LOC_REF_NO LIKE :LOC_REF_NO||'%';
-- Causes index scan (i.e. uses LOC_REF_NO_IDX)
SELECT * FROM LOC WHERE LOC_REF_NO LIKE 'SPDJ1501270%';
That the index is not used has been confirmed by doing an AUTOTRACE (EXPLAIN PLAN), and the SQL just runs slower. I have tried a number of things without success. Anyone got any idea what is going on? I am using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit.
Update 1:
Note that the index is used when I use an equals with a parameter:
SELECT * FROM LOC WHERE LOC_REF_NO = :LOC_REF_NO;
Explain Plan:
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 93 | 5 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| LOC | 1 | 93 | 5 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | LOC_REF_NO_IDX | 1 | | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(NLSSORT("LOC_REF_NO",'nls_sort=''BINARY_AI''')=NLSSORT(:LOC_REF_NO,'nls_sort=''BINARY_AI'''))
Whereas
SELECT * FROM LOC WHERE LOC_REF_NO LIKE :LOC_REF_NO||'%';
Explain Plan:
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50068 | 3471K| 5724 (1)| 00:01:09 |
|* 1 | TABLE ACCESS FULL| LOC | 50068 | 3471K| 5724 (1)| 00:01:09 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("LOC_REF_NO" LIKE :LOC_REF_NO||'%')
Dumbfounded!
Update 2:
The reason we are using NLSSORT in an index is to make Oracle queries case insensitive, and this was the general recommendation. Previously we used function-based indexes with NLS_UPPER. The strange thing is that with that approach the index is always used, parameter or not, as shown below.
So if table is as above, LOC_REF_NO_IDX index removed and this one added:
CREATE INDEX LOC_REF_NO_CI_IDX ON LOC
(
NLS_UPPER(LOC_REF_NO) ASC
)
/
Then all of the following use the index:
ALTER SESSION SET NLS_COMP=BINARY NLS_SORT=BINARY;
SELECT * FROM LOC WHERE NLS_UPPER(LOC_REF_NO) LIKE :LOC_REF_NO||'%';
-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 50068 | 5329K| 5700 (1)| 00:01:09 |
| 1 | TABLE ACCESS BY INDEX ROWID| LOC | 50068 | 5329K| 5700 (1)| 00:01:09 |
|* 2 | INDEX RANGE SCAN | LOC_REF_NO_CI_IDX | 9012 | | 43 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(NLS_UPPER("LOC_REF_NO") LIKE :LOC_REF_NO||'%')
filter(NLS_UPPER("LOC_REF_NO") LIKE :LOC_REF_NO||'%')
So for some reason when using LIKE with a parameter on a linguistic index, the Oracle optimizer is deciding not to use the index.
According to Oracle support note 1451804.1 this is a known limitation of using LIKE with NLSSORT-based indexes.
If you look at the execution plan for your fixed-value query you see something like:
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(NLSSORT("LOC_REF_NO",'nls_sort=''BINARY_AI''')>=HEXTORAW('7370646A3135303132373000') AND NLSSORT("LOC_REF_NO",'nls_sort=''BINARY_AI''')<HEXTORAW('7370646A3135303132373100') )
Those raw values evaluate to spdj1501270 and spdj1501271; those are derived from your constant string, and any values matching your like condition will be in that range. That parse-time transformation has to be based on a constant value, and doesn't work with a bind variable or an expression, presumably because it's evaluated too late.
See the note for more information, but there doesn't seem to be a workaround unfortunately. You might have to go back to your NLS_UPPER approach.
Previous explanation applies generally but not in this specific case, but kept for reference...
In general, with the fixed value the optimiser can estimate how selective your query is when it parses it, because it can know roughly what proportion of index values match that value. It may or may not use the index, depending on the actual value you use.
With the bind variable it comes up with a plan via bind variable peeking:
In bind variable peeking (also known as bind peeking), the optimizer looks at the value in a bind variable when the database performs a hard parse of a statement.
When a query uses literals, the optimizer can use the literal values to find the best plan. However, when a query uses bind variables, the optimizer must select the best plan without the presence of literals in the SQL text. This task can be extremely difficult. By peeking at bind values the optimizer can determine the selectivity of a WHERE clause condition as if literals had been used, thereby improving the plan.
It uses the statistics it has gathered to decide if any particular value is more likely than others. That probably isn't going to be the case here, especially with the like. It's falling back to a full table scan because it can't determine, when it does the hard parse, that the index will be more selective most of the time. Imagine, for example, that the parse decided to use the index, but then you supplied a bind value of just S, or even null - using the index would then do much more work than a full table scan.
Also worth noting:
When choosing a plan, the optimizer only peeks at the bind value during the hard parse. This plan may not be optimal for all possible values.
Adaptive cursor sharing can mitigate this, but this query may not qualify:
The criteria used by the optimizer to decide whether a cursor is bind-sensitive include the following:
The optimizer has peeked at the bind values to generate selectivity estimates.
A histogram exists on the column containing the bind value.
When I mocked this up with a small-ish amount of limited data, v$sql reported both is_bind_sensitive and is_bind_aware as 'N'.
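For reference, this is the kind of check I mean; a sketch, with an illustrative SQL_TEXT filter:

select sql_id, child_number, is_bind_sensitive, is_bind_aware
from v$sql
where sql_text like 'SELECT * FROM LOC WHERE LOC_REF_NO LIKE%';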

Using function based index (oracle) to speed up count(X)

I've a table Film:
CREATE TABLE film (
film_id NUMBER(5) NOT NULL,
title varchar2(255));
I wanted to make the query, which counts how many titles start with the same first word and displays only the words occurring 20 or more times, faster using a function-based index. The query:
SELECT FW_SEPARATOR.FIRST_WORD AS "First Word", COUNT(FW_SEPARATOR.FIRST_WORD) AS "Count"
FROM (SELECT regexp_replace(FILM.TITLE, '(\w+).*$','\1') AS FIRST_WORD FROM FILM) FW_SEPARATOR
GROUP BY FW_SEPARATOR.FIRST_WORD
HAVING COUNT(FW_SEPARATOR.FIRST_WORD) >= 20;
The thing is, I created this function based index:
CREATE INDEX FIRST_WORD_INDEX ON FILM(regexp_replace(TITLE, '(\w+).*$','\1'));
But it didn't speed anything up...
I was wondering if anyone could help me with this :)
Add a redundant predicate to the query to convince Oracle that the expression will not return null values and an index can be used:
select regexp_replace(film.title, '(\w+).*$','\1') first_word
from film
where regexp_replace(film.title, '(\w+).*$','\1') is not null;
Oracle can use an index like a skinny version of a table. Many queries only contain a small subset of the columns in a table. If all the columns in that set are part of the same index, Oracle can use that index instead of the table. This will be either an INDEX FAST FULL SCAN or an INDEX FULL SCAN. The data may be read similar to the way a regular table scan works. But since the index is much smaller than the table, that access method can be much faster.
But function-based indexes do not store NULLs. Oracle cannot use an index scan if it thinks there is a NULL that is not stored in the index. In this case, if the base column was defined as NOT NULL, the regular expression would always return a non-null value. But unsurprisingly, Oracle has not built code to determine whether or not a regular expression could return NULL. That sounds like an impossible task, similar to the halting problem.
There are several ways to convince Oracle that the expression is not null. The simplest may be to repeat the predicate and add an IS NOT NULL condition.
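Another option (my addition, untested against this exact schema) is to append a constant key column. An index entry is skipped only when all of its key columns are null, so the constant guarantees every row is stored, and Oracle can then scan the index without the redundant predicate:

create index film_idx2 on film(regexp_replace(title, '(\w+).*$','\1'), 0);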
Sample Schema
create table film (
film_id number(5) not null,
title varchar2(255) not null);
insert into film select rownumber, column_value
from
(
select rownum rownumber, column_value from table(sys.odcivarchar2list(
q'<The Shawshank Redemption>',
q'<The Godfather>',
q'<The Godfather: Part II>',
q'<The Dark Knight>',
q'<Pulp Fiction>',
q'<The Good, the Bad and the Ugly>',
q'<Schindler's List>',
q'<12 Angry Men>',
q'<The Lord of the Rings: The Return of the King>',
q'<Fight Club>'))
);
create index film_idx1 on film(regexp_replace(title, '(\w+).*$','\1'));
begin
dbms_stats.gather_table_stats(user, 'FILM');
end;
/
Query that does not use index
Even with an index hint, the normal query will not use an index. Remember that hints are directives, and this query would use the index if it was possible.
explain plan for
select /*+ index_ffs(film) */ regexp_replace(title, '(\w+).*$','\1') first_word
from film;
select * from table(dbms_xplan.display);
Plan hash value: 1232367652
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 50 | 3 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| FILM | 10 | 50 | 3 (0)| 00:00:01 |
--------------------------------------------------------------------------
Query that uses index
Now add the extra condition and the query will use the index. I'm not sure why it uses an INDEX FULL SCAN instead of an INDEX FAST FULL SCAN. With such small sample data it doesn't matter. The important point is that an index is used.
explain plan for
select regexp_replace(film.title, '(\w+).*$','\1') first_word
from film
where regexp_replace(film.title, '(\w+).*$','\1') is not null;
select * from table(dbms_xplan.display);
Plan hash value: 1151375616
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 50 | 1 (0)| 00:00:01 |
|* 1 | INDEX FULL SCAN | FILM_IDX1 | 10 | 50 | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( REGEXP_REPLACE ("TITLE",'(\w+).*$','\1') IS NOT NULL)

query taking too long to execute from plsql block

This query takes 1 second when executed on its own, but when the same query is executed from a procedure it takes 20 seconds. Please help me with this.
SELECT * FROM
(SELECT TAB1.*,ROWNUM ROWNUMM FROM
(SELECT wh.workitem_id, wh.workitem_priority, wh.workitem_type_id, wt.workitem_type_nm,
wh.workitem_status_id, ws.workitem_status_nm, wh.analyst_group_id,
ag.analyst_group_nm, wh.owner_uuid, earnings_estimate.pr_get_name_from_uuid(owner_uuid) owner_name,
wh.create_user_id, earnings_estimate.pr_get_name_from_uuid( wh.create_user_id) create_name, wh.create_ts,
wh.update_user_id,earnings_estimate.pr_get_name_from_uuid(wh.update_user_id) update_name, wh.update_ts, wh.bb_ticker_id, wh.node_id,
wh.eqcv_analyst_uuid, earnings_estimate.pr_get_name_from_uuid( wh.eqcv_analyst_uuid) eqcv_analyst_name,
WH.WORKITEM_NOTE,Wh.PACKAGE_ID ,Wh.COVERAGE_STATUS_NUM ,CS.COVERAGE_STATUS_CD ,Wh.COVERAGE_REC_NUM,I.INDUSTRY_CD INDUSTRY_CODE,I.INDUSTRY_NM
INDUSTRY_NAME,WOT.WORKITEM_OUTLIER_TYPE_NM as WORKITEM_SUBTYPE_NM
,count(1) over() AS total_count,bro.BB_ID BROKER_BB_ID,bro.BROKER_NM BROKER_NAME, wh.assigned_analyst_uuid,earnings_estimate.pr_get_name_from_uuid(wh.assigned_analyst_uuid)
assigned_analyst_name
FROM earnings_estimate.workitem_type wt,
earnings_estimate.workitem_status ws,
earnings_estimate.workitem_outlier_type wot,
(SELECT * FROM (
SELECT WH.ASSIGNED_ANALYST_UUID,WH.DEFERRED_TO_DT,WH.WORKITEM_NOTE,WH.UPDATE_USER_ID,EARNINGS_ESTIMATE.PR_GET_NAME_FROM_UUID(WH.UPDATE_USER_ID)
UPDATE_NAME, WH.UPDATE_TS,WH.OWNER_UUID, EARNINGS_ESTIMATE.PR_GET_NAME_FROM_UUID(OWNER_UUID)
OWNER_NAME,WH.ANALYST_GROUP_ID,WH.WORKITEM_STATUS_ID,WH.WORKITEM_PRIORITY,EARNINGS_ESTIMATE.PR_GET_NAME_FROM_UUID( WI.CREATE_USER_ID) CREATE_NAME, WI.CREATE_TS,
wi.create_user_id,wi.workitem_type_id,wi.workitem_id,RANK() OVER (PARTITION BY WH.WORKITEM_ID ORDER BY WH.CREATE_TS DESC NULLS LAST, ROWNUM) R,
wo.bb_ticker_id, wo.node_id,wo.eqcv_analyst_uuid,
WO.PACKAGE_ID ,WO.COVERAGE_STATUS_NUM ,WO.COVERAGE_REC_NUM,
wo.workitem_outlier_type_id
FROM earnings_estimate.workitem_history wh
JOIN EARNINGS_ESTIMATE.workitem_outlier wo
ON wh.workitem_id=wo.workitem_id
JOIN earnings_estimate.workitem wi
ON wi.workitem_id=wo.workitem_id
AND WI.WORKITEM_TYPE_ID=3
and wh.workitem_status_id not in (1,7)
WHERE ( wo.bb_ticker_id IN (SELECT
column_value from table(v_tickerlist) )
)
)wh
where r=1
AND DECODE(V_DATE_TYPE,'CreatedDate',WH.CREATE_TS,'LastModifiedDate',WH.UPDATE_TS) >= V_START_DATE
AND decode(v_date_type,'CreatedDate',wh.create_ts,'LastModifiedDate',wh.update_ts) <= v_end_date
and decode(wh.owner_uuid,null,-1,wh.owner_uuid)=decode(v_analyst_id,null,decode(wh.owner_uuid,null,-1,wh.owner_uuid),v_analyst_id)
) wh,
earnings_estimate.analyst_group ag,
earnings_estimate.coverage_status cs,
earnings_estimate.research_document rd,
( SELECT
BB.BB_ID ,
BRK.BROKER_ID,
BRK.BROKER_NM
FROM EARNINGS_ESTIMATE.BROKER BRK,COMMON.BB_ID BB
WHERE BRK.ORG_ID = BB.ORG_ID
AND BRK.ORG_LOC_REC_NUM = BB.ORG_LOC_REC_NUM
AND BRK.primary_broker_ind='Y') bro,
earnings_estimate.industry i
WHERE wh.analyst_group_id = ag.analyst_group_id
AND wh.workitem_status_id = ws.workitem_status_id
AND wh.workitem_type_id = wt.workitem_type_id
AND wh.coverage_status_num=cs.coverage_status_num
AND wh.workitem_outlier_type_id=wot.workitem_outlier_type_id
AND wh.PACKAGE_ID=rd.PACKAGE_ID(+)
AND rd.industry_id=i.industry_id(+)
AND rd.BROKER_BB_ID=bro.BB_ID(+)
ORDER BY wh.create_ts)tab1 )
;
I agree that the problem is most likely related to SELECT column_value from table(v_tickerlist).
By default, Oracle estimates that table functions return 8168 rows. Since you're testing the query with a single value, I assume that the actual number of values is usually much smaller. Cardinality estimates, like any forecast, are always wrong. But they should at least be in the ballpark of the actual cardinality for the optimizer to do its job properly.
You can force Oracle to always check the size with dynamic sampling. This will require more time to generate the plan, but it will probably be worth it in this case.
For example:
SQL> --Sample type
SQL> create or replace type v_tickerlist is table of number;
2 /
Type created.
SQL> --Show explain plans
SQL> set autotrace traceonly explain;
SQL> --Default estimate is poor. 8168 estimated, versus 3 actual.
SQL> SELECT column_value from table(v_tickerlist(1,2,3));
Execution Plan
----------------------------------------------------------
Plan hash value: 1748000095
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8168 | 16336 | 16 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 16 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------
SQL> --Estimate is perfect when dynamic sampling is used.
SQL> SELECT /*+ dynamic_sampling(tickerlist, 2) */ column_value
2 from table(v_tickerlist(1,2,3)) tickerlist;
Execution Plan
----------------------------------------------------------
Plan hash value: 1748000095
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 6 | 6 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 3 | 6 | 6 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement (level=2)
SQL>
If that doesn't help, look at your explain plan (and post it here). Find where the cardinality estimate is most wrong, then try to figure out why that is.
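If dynamic sampling is not an option, another common workaround is the cardinality hint; it is undocumented, so treat this sketch as a last resort, and 10 is only a guess at your typical list size:

SELECT /*+ cardinality(tickerlist, 10) */ column_value
FROM table(v_tickerlist) tickerlist;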
Your query is very large and will take time when executed against bulk data. Try creating a few denormalised temp tables, extract the data into them, and then join the temp tables. That will improve performance.
With this standalone query, do not pass any variable inside the subqueries, as in the line below:
WHERE ( wo.bb_ticker_id IN (SELECT
column_value from table(v_tickerlist)
Also, the outer joins will hurt performance. Better to implement the denormalised temp tables.
