When filtering a single table on only the primary key, why is the optimizer doing a full table scan? - oracle

I have a monster-sized non-partitioned table. I recently updated the statistics on it as well.
The primary key is on a char field called "ID".
SELECT * FROM "MYDATA" WHERE "ID" = '0000492319'
The plan says TABLE ACCESS (FULL) and has a filter predicate on the ID. This results in a query that takes 8 seconds to run.
If I give the optimizer a hint to use the primary key, the query takes 1.6 seconds to run.
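For reference, the hinted query was along these lines (MYDATA_PK is an assumed name for the primary key index):
SELECT /*+ INDEX(MYDATA MYDATA_PK) */ *
FROM   "MYDATA"
WHERE  "ID" = '0000492319'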
It's bizarre to me that I should need to provide this hint. The indexed plan estimates a lower cost, and the optimizer should be aware of this.
Here is the filter predicate:
NLSSORT(INTERNAL_FUNCTION(ID),'nls_sort="JAPANESE_M"')=HEXTORAW('017...')
The database NLS_SORT is set to JAPANESE_M and the NLS_CHARACTERSET is JA16SJIS.
So nothing seems to be mismatched that would cause a special sort function to be called. It's a bit odd, though.
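The NLSSORT() wrapper usually indicates a linguistic comparison (NLS_COMP = LINGUISTIC together with a linguistic NLS_SORT). These settings can be confirmed with queries along these lines:
SELECT parameter, value
FROM   nls_session_parameters
WHERE  parameter IN ('NLS_SORT', 'NLS_COMP');

SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_SORT', 'NLS_COMP', 'NLS_CHARACTERSET');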
One more piece of information: if I select only the "ID" column in my query, then the planner chooses INDEX (FAST FULL SCAN) automatically.
The problem only arises when I use SELECT *.
Oracle Database Version: 10.2.0.5.0.

Related

make the optimizer use all columns of an index

We have a few tables storing temporal data that have a natural primary key consisting of 3 columns. Example: maximum temperature for this day. This is the composite primary key index (in this order):
id number(10): the id of the timeserie.
day date: the day for which this data was reported
kill_at timestamp: the last timestamp before this data was deleted or updated.
Simplified logic: when we make a forecast at 10:00am, the last entry found for this id/day combination has its kill_at changed to 9:59am, and the newly calculated value is stored with a kill_at timestamp of '31.12.2999'.
Typical queries on this table are:
1) where id=? and day=? and kill_at=?
2) where id=? and day between (? and ?) and kill_at=?
3) where id=? and day between (? and ?)
4) where id=?
There are plenty of timeseries that we do not forecast; we get one value when it's measured and it never changes. But there are some timeseries that we forecast 200-300 times, so for one id/day combination there are 200+ entries with different values for kill_at.
We currently have the primary key (id, day, kill_at) as the only (unique) index on this table. But when I run query 2 (exact id and a day range), the optimizer decides to use only the first column of the index:
ID  OPERATION         OPTIONS         OBJECT_NAME  OPTIMIZER  SEARCH_COLUMNS
0   SELECT STATEMENT                               ALL_ROWS   0
1   FILTER                                                    0
2   TABLE ACCESS      BY INDEX ROWID  DPD                     0
3   INDEX             RANGE SCAN      DPD_PK                  1
This really hurts us for those timeseries that have been updated 200+ times.
Now I was looking for a way to force the optimizer to use all 3 columns of our index, but I can't find a hint for that. Is there one?
Or are there any other suggestions on how to speed up my query? We are trying to reduce the peak durations; the average durations are of lesser concern.
What confuses me: the above execution plan is what I see in dba_hist_sql_plan, and it is the only execution plan for this statement. But when I let my client show the explain plan, search_columns is sometimes 1 and sometimes 3. It is never 3 when our application runs this statement.
We actually found the cause of this problem. We're using JPA/JDBC, and the JDBC date types weren't modeled correctly. While the Oracle DATE type has second precision, somebody (I now hate him) made the "day" attribute in our entity of type java.sql.Timestamp (although it holds only a day, without a time part).
The effect is that Oracle needs to apply a conversion function to each entry in the table to turn it into a TIMESTAMP before it can compare it with the Timestamp query parameter. That way the index cannot be used properly.
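As a SQL-level sketch of the effect (bind positions are assumed): with a TIMESTAMP bind, Oracle promotes the DATE column to TIMESTAMP, so the predicate behaves like INTERNAL_FUNCTION("DAY") = :2 and the index column is lost; casting the bind (or binding a DATE) leaves the column untouched:
SELECT *
FROM   dpd
WHERE  id      = :1
AND    day     BETWEEN CAST(:2 AS DATE) AND CAST(:3 AS DATE)
AND    kill_at = :4;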

Recommended way to index a date field in postgres?

I have a few tables with about 17M rows that all have a date column I would like to use frequently for searches. I am considering either just putting an index on the column and seeing how things go, or sorting the rows by date as a one-time operation and inserting everything into a new table so that the primary key ascends as the date ascends.
Since both of these are pretty time-consuming, I thought it might be worth asking here first for input.
The end goal is to load SQL query results into pandas for some analysis, if that is relevant here.
An index on the date column makes sense when you are going to search the table for given date(s), e.g.:
select * from test
where the_date = '2016-01-01';
-- or
select * from test
where the_date between '2016-01-01' and '2016-01-31';
-- etc
For these queries it does not matter whether the sort order of the primary key and the date column is the same. Hence rewriting the data into a new table would be useless; just create an index.
However, if you are going to use the index only in ORDER BY:
select * from test
order by the_date;
then a primary key integer index may be significantly (2-4 times) faster than an index on a date column.
Postgres supports, to some extent, clustered indexes, which is what you suggest by removing and reinserting the data.
In fact, removing and reinserting the data in the order you want will not change the time the query takes, because Postgres does not know about the order of the data.
If you know that the table's data does not change, then cluster the data based on the index you create.
This operation reorders the table based on the order in the index. It is very effective until you update the table. The syntax is:
CLUSTER tableName USING IndexName;
See the manual for details.
I also recommend you use
explain <query>;
to compare a query's plan before and after adding an index, or before and after clustering.
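For example, a minimal sketch using the table and column names from the snippets above:
CREATE INDEX test_the_date_idx ON test (the_date);

EXPLAIN ANALYZE
SELECT *
FROM   test
WHERE  the_date BETWEEN '2016-01-01' AND '2016-01-31';
Note that EXPLAIN ANALYZE actually executes the query, unlike plain EXPLAIN.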

DB Index not being called

I know this question has been asked more than once here, but I am not able to resolve my issue, so I am posting it again for help.
I have a table called Transaction in an Oracle database (11g) with 2.7 million records. There is a not-null varchar2(20) column (txn_id) which contains numeric values. It is not the primary key of the table, and most of the values are unique; by "most" I mean a given value can appear 3-4 times in the table.
If I perform a simple select based on TXN_ID, it takes about 5 seconds or more to return the result.
Select * from Transaction t where t.txn_id = 245643
I have an index created on this column, but when I check the explain plan for the above query, it uses a full table scan. This query is used many times in the application, which is making the application slow.
Can you please provide some help what might be causing this issue?
You are comparing a varchar column with a numeric literal (245643). This forces Oracle to convert one side of the equality, and Oracle always converts the character side: the predicate effectively becomes TO_NUMBER(txn_id) = 245643, which prevents the index on txn_id from being used. Instead of relying on implicit conversion, use a character literal:
SELECT * FROM Transaction t WHERE t.txn_id = '245643'
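To confirm that the implicit conversion is the culprit, you can inspect the predicate section of the plan; a sketch using DBMS_XPLAN:
EXPLAIN PLAN FOR
SELECT * FROM Transaction t WHERE t.txn_id = 245643;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', NULL, 'BASIC +PREDICATE'));
-- expect a filter along the lines of: TO_NUMBER("T"."TXN_ID")=245643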

Best way to identify a handful of records expected to have a flag set to TRUE

I have a pretty wide table that I expect to receive 7 million records a month. A small portion of these records are expected to be flagged as "problem" records.
What is the best way to implement the table to locate these records in an efficient way?
I'm new to Oracle, but is a materialized view a valid option? Are there such things as indexed views in Oracle, or is that potentially really the same thing?
Most of the reporting is by month, so partitioning by month seems like an option, but a "problem" record could theoretically linger for several months. Otherwise, the reporting should be mostly for the current month. Would you expect querying across all month partitions to locate any problem record to cause significant performance issues compared to using a single table?
Your general thoughts on where to start would be appreciated. I realize I need to read up, and I'll do that, but I wanted to get the community's thoughts first to make sure I read the right stuff.
One more thought: the primary key is a GUID varchar2(36). In order of magnitude, how much of a performance hit would you expect this to be relative to using a NUMBER primary key? This worries me, but it is out of my control.
It depends what you mean by "flagged", but it sounds to me like you would benefit from a simple index, a function-based index, or an indexed virtual column.
In all cases you should be careful to ensure that all the index columns are NULL for rows that do not need to be flagged. That way your index will contain only the flagged rows (by default, Oracle does not create B-Tree index entries for rows where all indexed column values are NULL).
Your primary key being a VARCHAR2 GUID should make no difference, at least with regard to the flagging of rows in this question; indexes point to rows via Oracle's internal ROWIDs.
Indexes support partitioning, so if your data is already partitioned, your index could be set to match.
Simple column index method
If you can dictate how the flagging works, or the column already exists, then I would simply add an index to it like so:
CREATE INDEX my_table_problems_idx ON my_table (problem_flag)
/
Function-based index method
If the data model is fixed / there is no flag column, then you can create a function-based index assuming that you have all the information you need in the target table. For example:
CREATE INDEX my_table_problems_fnidx ON my_table (
  CASE
    WHEN amount > 100 THEN 'Y'
    ELSE NULL
  END
)
/
Now if you use the same logic in your SELECT statement, you should find that it uses the index to efficiently match rows.
SELECT *
FROM   my_table
WHERE  CASE
         WHEN amount > 100 THEN 'Y'
         ELSE NULL
       END IS NOT NULL
/
This is a bit clunky, though, and it requires you to use the same logic in your queries as in the index definition. Not great. You could use a view to mask this, but you would still be duplicating logic in at least two places.
Indexed virtual column
In my opinion, this is the best way to do it if you are computing the value dynamically (available from 11g onwards):
ALTER TABLE my_table
  ADD virtual_problem_flag VARCHAR2(1) AS (
    CASE
      WHEN amount > 100 THEN 'Y'
      ELSE NULL
    END
  )
/
CREATE INDEX my_table_problems_idx ON my_table (virtual_problem_flag)
/
Now you can just query the virtual column as if it were a real column, i.e.
SELECT *
FROM my_table
WHERE virtual_problem_flag = 'Y'
/
This will use the index and puts the function-based logic into a single place.
Create a new table with just the PKs of the problem rows.
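A minimal sketch of that alternative, assuming the GUID primary key from the question (all names are placeholders):
CREATE TABLE my_table_problems (
  id VARCHAR2(36) NOT NULL,
  CONSTRAINT my_table_problems_pk PRIMARY KEY (id),
  CONSTRAINT my_table_problems_fk
    FOREIGN KEY (id) REFERENCES my_table (id)
)
/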

Is an index clustered or unclustered in Oracle?

How can I determine if an Oracle index is clustered or unclustered?
I've done
select FIELD from TABLE where rownum <100
where FIELD is the field on which the index is built. The tuples came back ordered, but the result is misleading because the index is unclustered.
By default, all indexes in Oracle are unclustered. The only clustered indexes in Oracle are the primary key indexes of index-organized tables (IOTs).
You can determine whether a table is an IOT by looking at the IOT_TYPE column in the ALL_TABLES view (its primary key can be determined by querying the ALL_CONSTRAINTS and ALL_CONS_COLUMNS views).
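For example (the table name is a placeholder; filter on OWNER as needed):
SELECT iot_type
FROM   all_tables
WHERE  table_name = 'MY_TABLE';

SELECT cc.column_name, cc.position
FROM   all_constraints c
JOIN   all_cons_columns cc
  ON   cc.owner = c.owner
 AND   cc.constraint_name = c.constraint_name
WHERE  c.table_name = 'MY_TABLE'
AND    c.constraint_type = 'P'
ORDER BY cc.position;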
Here are some reasons why your query might return ordered rows:
Your table is index-organized and FIELD is the leading part of its primary key.
Your table is heap-organized, but the rows happen to be ordered by FIELD; this sometimes happens with an incrementing identity column.
Case 2 returns sorted rows only by chance. The order of the inserts is not guaranteed; furthermore, Oracle is free to reuse old blocks if some happen to have available space in the future, disrupting the fragile ordering.
Case 1 will return ordered rows most of the time, but you shouldn't rely on it, since the order of the rows returned depends on the access path algorithm, which may change in the future (or if you change a DB parameter, especially parallelism).
In both cases, if you want ordered rows you should supply an ORDER BY clause:
SELECT field
FROM  (SELECT field
       FROM   TABLE
       ORDER BY field)
WHERE  rownum <= 100;
There is no concept of a "clustered index" in Oracle as in SQL Server and Sybase. There is an Index-Organized Table, which is similar but not the same.
"Clustered" indices, as implemented in Sybase, MS SQL Server and possibly others, where rows are physically stored in the order of the indexed column(s) don't exist as such in Oracle. "Cluster" has a different meaning in Oracle, relating, I believe, to the way blocks and tables are organized.
Oracle does have "Index Organized Tables", which are physically equivalent, but they're used much less frequently because the query optimizer works differently.
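For reference, a minimal IOT declaration looks like this (names are placeholders):
CREATE TABLE my_iot (
  id  NUMBER PRIMARY KEY,
  val VARCHAR2(100)
) ORGANIZATION INDEX;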
The closest I can get to an answer to the identification question is to try something like this:
SELECT IOT_TYPE FROM user_tables
WHERE table_name = '<your table name>'
My 10g instance reports IOT or null accordingly.
Index-organized tables have to be organized on the primary key. Where the primary key is a sequence-generated value, this is often useless or even counter-productive (because simultaneous inserts contend for the same block).
Single-table clusters can be used to group data with the same column value in the same database block(s), but they are not ordered.
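A sketch of a single-table hash cluster, with placeholder names and sizing:
CREATE CLUSTER my_cluster (grp NUMBER)
  SIZE 1024 SINGLE TABLE HASHKEYS 1000;

CREATE TABLE my_clustered_data (
  grp NUMBER,
  val VARCHAR2(100)
) CLUSTER my_cluster (grp);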

Resources