I have an index on a table in Oracle and I don't know whether it is being used, or how it is used. Is there a way to log all statements that use that index? If not, is there a way to tell whether the index is being used at all?
Thanks,
Understanding the ultimate goal would definitely be helpful.
You can query the GV$SQL_PLAN view to see which queries in the shared pool are using a particular index:
SELECT *
FROM gv$sql_plan
WHERE object_owner = <<owner of index>>
AND object_name = <<name of index>>
Depending on your system, however, SQL statements may not stay in the shared pool particularly long, so you may need to poll relatively frequently to ensure you don't miss anything. Depending on the Oracle version, edition, and licensed options, you may also be able to query the AWR tables for historical plan information. But those will only contain queries that were expensive enough to be captured in an AWR snapshot.
SELECT *
FROM dba_hist_sql_plan
WHERE object_owner = <<owner of index>>
AND object_name = <<name of index>>
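If you do need the polling approach against the shared pool, a minimal sketch (the log table and its columns here are illustrative, and SQL_ID assumes 10g or later) is to persist snapshots into your own table on a schedule:
-- one-time setup: a log table for captured plan lines
CREATE TABLE index_usage_log AS
SELECT SYSDATE AS captured_at, sql_id, plan_hash_value,
       object_owner, object_name, operation, options
FROM gv$sql_plan
WHERE 1 = 0;
-- run periodically, e.g. from a DBMS_SCHEDULER job
INSERT INTO index_usage_log
SELECT SYSDATE, sql_id, plan_hash_value,
       object_owner, object_name, operation, options
FROM gv$sql_plan
WHERE object_owner = <<owner of index>>
AND object_name = <<name of index>>;
COMMIT;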
If what you are trying to accomplish is to figure out whether an index is being used at all, however, you probably want to use something like index monitoring and let Oracle track which indexes are used. Be aware, however, that there are pitfalls to this approach. For example, indexes on foreign keys that are necessary for efficient deletes may not be flagged as used by index monitoring (nor will they be caught by just looking at query plans). Index monitoring may also miss cases where statistics on an index are used to come up with an efficient plan even though the index itself is never accessed.
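If you do go the monitoring route, the mechanics are simple (a sketch; the schema and index names are illustrative, and on 12c and later the results appear in USER_OBJECT_USAGE/DBA_OBJECT_USAGE rather than V$OBJECT_USAGE):
ALTER INDEX my_schema.my_index MONITORING USAGE;
-- ... let a representative workload run ...
SELECT index_name, used FROM v$object_usage WHERE index_name = 'MY_INDEX';
ALTER INDEX my_schema.my_index NOMONITORING USAGE;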
I'm pretty new to database work. We are using Oracle 19c and I have only SQL Developer available. I have a task to find and remove unused indexes from all tables of a particular owner (say owner = QTSD). I'm using the query below:
select * from ALL_INDEXES where owner='QTSD';
From the result, I check the LAST_ANALYZED column, and the indexes that were last analyzed a year ago I plan to drop one by one. Is this the correct way to do it? Also, does dropping the indexes require DB downtime, given that the corresponding tables hold ~100 million rows?
The fact that it was last analyzed a long time ago doesn't mean much. Why didn't you analyze it more often?
The optimizer decides whether to use an index or not. You can monitor an index's usage with e.g.
alter index my_index monitoring usage;
Do that index by index (if only a few are really suspicious), or write a script which generates the statements for you:
select 'alter index ' || index_name || ' monitoring usage;' from user_indexes;
Copy/paste the generated statements to execute them all.
Then wait some time (how long? I can't tell; it might take quite a while until enough SQL has been executed in the database to gather meaningful usage information), and then query v$object_usage; pay attention to the USED column value.
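For example, to list the monitored indexes that have not been used so far (a sketch; on 12c and later query USER_OBJECT_USAGE or DBA_OBJECT_USAGE instead, since V$OBJECT_USAGE only covers the current schema):
select index_name, table_name, monitoring, used,
       start_monitoring, end_monitoring
from v$object_usage
where used = 'NO';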
After that, you'll be somewhat smarter (meaning: you won't be just lucky guessing whether some index was used or not) and, hopefully, be able to drop unused indexes.
It is better to use the database's index monitoring features to find unused indexes.
See this if you are on version 12.1 or lower. See here if your DB is 12.2 or newer.
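On 12.2 and newer the database also tracks index usage automatically, so a query along these lines is often enough (a sketch against the DBA_INDEX_USAGE view; note the tracking is sampled by default, and indexes that were never used simply won't appear, so compare the result against DBA_INDEXES):
select owner, name, total_access_count, total_exec_count, last_used
from dba_index_usage
where owner = 'QTSD'
order by last_used;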
One of the scheduled jobs in the production environment runs daily and, based on past execution history, used to take only 20 minutes; today it has been running for more than 2 hours and still has not completed.
a) How to check whether the SQL plan has changed today or not?
b) What could be the reasons for the plan change? One I know of is a code change. What else could cause a plan change?
You can check if the SQL execution plan has changed by using the Automatic Workload Repository (AWR). First, you need to find the SQL_ID for the relevant query. The view GV$SQL contains the most recent SQL. If you can't find the query in that view, try DBA_HIST_SQLTEXT instead.
select sql_id, sql_text
from gv$sql
where lower(sql_fulltext) like '%some unique string%';
With the SQL_ID, you can start investigating historical information. The view DBA_HIST_SQLSTAT contains lots of summary information about the SQL. The most important column is PLAN_HASH_VALUE; if that value changes, then the execution plan has changed.
select snap_id, sql_id, plan_hash_value, executions_delta,
       elapsed_time_delta/1000000 seconds_delta,
       dba_hist_sqlstat.*
from dba_hist_sqlstat
--join to dba_hist_snapshot if you want to find precise times instead of SNAP_IDs.
where sql_id = '&SQL_ID'
order by dba_hist_sqlstat.snap_id;
If the plan has changed, you can view both plans with this:
select * from table(dbms_xplan.display_awr(sql_id => '&SQL_ID'));
Unfortunately, the most difficult part of query tuning with Oracle is that there are a dozen different ways to view the execution plans, and each of them provides slightly different data.
This query only returns numbers for the last execution, but it returns actual numbers and times, which helps you focus on the specific operation and wait events that caused the problem.
select dbms_sqltune.report_sql_monitor(sql_id => '&SQL_ID', type => 'text') from dual;
This query returns some additional execution plan information, specifically the Note section. Most graphical IDEs leave out that section, but it's vital for complex troubleshooting. If something weird is going on, the Note section will often explain why.
select * from table(dbms_xplan.display_cursor(sql_id => '&SQL_ID'));
There are many reasons why execution plans can change. If you add additional information to the question I may be able to make an educated guess.
Quick check:
Please check whether the statistics are up to date, both system and table statistics.
Please check whether any changes have been made to the table or its indexes.
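For a quick look at both (a sketch; the schema and table names are illustrative):
-- are the table statistics fresh or stale?
select table_name, last_analyzed, stale_stats
from dba_tab_statistics
where owner = 'MY_SCHEMA'
and table_name = 'MY_TABLE';
-- was there recent DDL against the table? (repeat for its indexes)
select object_name, object_type, last_ddl_time
from dba_objects
where owner = 'MY_SCHEMA'
and object_name = 'MY_TABLE';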
I have the following query, which monitors whether anyone has tried to log on with one of the technical users on the database:
SELECT COUNT (OS_USERNAME)
FROM DBA_AUDIT_SESSION
WHERE USERNAME IN ('USER1','USER2','USER3')
AND TIMESTAMP>=SYSDATE - 10/(24*60)
AND RETURNCODE !='0'
Unfortunately the performance of this SQL is quite poor, since it does a TABLE ACCESS FULL on sys.aud$. I tried to narrow it down with:
SELECT COUNT (sessionid)
FROM sys.aud$
WHERE userid IN ('USER1','USER2','USER3')
AND ntimestamp# >=SYSDATE - 10/(24*60)
AND RETURNCODE !='0'
and action# between 100 and 102;
And it is even worse. Is it possible at all to optimize that query by forcing Oracle to use indexes here? I would be grateful for any help & tips.
SYS.AUD$ does not have any default indexes but it is possible to create one on ntimestamp#.
But proceed with caution. The support document "The Effect Of Creating Index On Table Sys.Aud$ (Doc ID 1329731.1)" includes this warning:
Creating additional indexes on SYS objects including table AUD$ is not supported.
Normally that would be the end of the conversation and you'd want to try another approach. But in this case there are a few reasons why it's worth a shot:
The document goes on to say that an index may be helpful, and to test it first.
It's just an index. The SYS schema is special, but we're still just talking about an index on a table. It could slow things down, or maybe cause space errors, like any index would. But I doubt there's any chance it could do something crazy like cause wrong-results bugs.
It's somewhat common to change the tablespace of the audit trail, so that table isn't sacred.
I've seen indexes on it before. 2 of the 400 databases I manage have an index on the columns SESSIONID,SES$TID (although I don't know why). Those indexes have been there for years, have been through an upgrade and patches, and haven't caused problems as far as I know.
Creating an "unsupported" index may be a good option for you, if you're willing to test it and accept a small amount of risk.
From Oracle 10g onwards the optimizer will choose the best plan for your query, provided you write proper joins. I'm not sure how many records exist in your DBA_AUDIT_SESSION, but you can always use a PARALLEL hint to speed up the execution somewhat.
SELECT /*+Parallel*/ COUNT (OS_USERNAME)
--select COUNT (OS_USERNAME)
FROM DBA_AUDIT_SESSION
WHERE USERNAME IN ('USER1','USER2','USER3')
AND TIMESTAMP>=SYSDATE - 10/(24*60)
AND RETURNCODE !='0'
The query cost is reduced to 3 compared with the earlier plan.
NumRows: 8080019
So it is pretty large due to company regulations. Unfortunately using /*+Parallel*/ here makes it run longer, so the performance is still worse.
Any other suggestions?
We have a table called t_reading, with the following schema:
MEAS_ASS_ID NUMBER(12,0)
READ_DATE DATE
READ_TIME VARCHAR2(5 BYTE)
NUMERIC_VAL NUMBER
CHANGE_REASON VARCHAR2(240 BYTE)
OLD_IND NUMBER(1,0)
This table is indexed as follows:
CREATE INDEX RED_X4 ON T_READING
(
"OLD_IND",
"READ_DATE" DESC,
"MEAS_ASS_ID",
"READ_TIME"
)
This exact table (with the same data) exists on two servers, the only difference is the Oracle version installed on each one.
The query in question is:
SELECT * FROM t_reading WHERE OLD_IND = 0 AND MEAS_ASS_ID IN (5022, 5003) AND read_date BETWEEN to_date('30/10/2012', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy');
This query executes in less than a second on Oracle 10, and around a minute in Oracle 9.
Are we missing something?
EDIT:
Execution plan for Oracle 9:
Execution plan for Oracle 10:
"Are we missing something?"
Almost certainly, but it's difficult for us to tell you what.
There were some performance improvements in the CBO from 9i to 10g but it's unlikely to make that much difference. So it must be some variation in your systems, which is obviously the hardest thing for us to diagnose, blind and remote as we are.
So, the first things to rule out are general system differences - disk speeds, I/O bottlenecks, memory sizing, etc. You say you have two servers; do they have different specs? Whilst it will require assistance from a sysadmin type to investigate these things, we can discount them with a single question: is it just this query, or can you reproduce this effect with many different queries?
If it is just the query, there are at least three possible explanations.
One is data distribution. How was the data populated in the two databases? If the 10g database was loaded from an export of the 9i database, was the data sorted in some fashion? Even if it wasn't, it is possible that the ETL process compacted and organised the data and built the fresh indexes in a way which improves the access times.
Another is statistics. Are the 10g statistics fresh and realistic, whilst the 9i statistics are stale and misleading?
A third possibility is a stored execution plan. (You have posted a query with literals; this only applies to queries with bind variables.) Searches on date ranges are notoriously hard to tune. A date range of to_date('30/10/2012', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy') suits one sort of plan, whereas a date range of to_date('01/01/2010', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy') may well suit a different approach. If the extant plan on the 9i database suits a broader range, then queries for a narrow range may take a long time.
While I've been typing this you have published the explain plans. The killer detail is at the bottom of the 9i plan:
Note: rule-based optimization
You haven't got any stats for the table or the index, so the optimizer is applying the dumb defaults of the RBO. You should really address this, but it's not a simple task. You may need to gather stats for all your tables. You may need to change the OPTIMIZER_MODE in the init.ora file. You may need to undertake a regression test of all the queries on your database. So, it's not something you should do lightly.
In the meantime, if this query is bugging you, you'll need to wrangle the Rule-Based Optimizer the old-fashioned way. Find out more.
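For reference, gathering statistics for a whole schema and switching the optimizer mode look roughly like this (a sketch; the schema name is illustrative, and on 9i any such change belongs inside the regression test mentioned above):
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'MY_SCHEMA',  -- illustrative schema name
    cascade => TRUE          -- include index statistics
  );
END;
/
ALTER SYSTEM SET optimizer_mode = CHOOSE SCOPE = SPFILE;  -- or edit init.ora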
A couple of potential explanations:
You're range scanning different indexes. Assuming that you've got the same index on your 10g table but have just given it a different name, the explain plans are different.
The main worry I would have is the lack of information in the rows, bytes and cost column of the explain plan on your 9i query. Oracle 9i does not collect statistics by default and this detail would indicate that you have not collected statistics on this table. Use dbms_stats to gather statistics on your table and the indexes. Specifically the procedure gather_table_stats:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => user,
    tabname          => 'T_READING',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL INDEXED COLUMNS',
    cascade          => TRUE  -- gather index statistics
  );
END;
/
There are plenty of other options if you're interested. Assuming the indexes are different this may help the CBO (assuming it's "turned on") to pick the correct index.
The other options include what server they are on and what the database parameters are. If they're on different servers then the relative "power", disk-speed, I/O and a never-ending list of other options could easily cause a difference. If the database parameters are different then you have the same problem.
Database tuning is an art as much as a science. Oracle has a whole book on it and there are plenty of other resources out there.
A few observations:
your index is a DESCENDING index. This is a function-based index and, as such, it won't work as expected under the RULE optimizer.
your 9i plan shows access only on OLD_IND; your 10g plan (you cut off the important predicate bits) shows a range scan + inlist iterator, so depending on that RED_PK it may be accessing on MEAS_ASS_ID, which is perhaps more selective.
in terms of indexing too: to answer your query, with WHERE OLD_IND = 0 AND MEAS_ASS_ID IN (5022, 5003) AND read_date BETWEEN (i.e. OLD_IND equality, MEAS_ASS_ID equality and read_date range-scanned), a better index is (OLD_IND, MEAS_ASS_ID, READ_DATE): do the range scan last to cut down on IO (see the sketch below).
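Something along these lines (the index name is illustrative):
CREATE INDEX red_x5 ON t_reading (old_ind, meas_ass_id, read_date);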
Have you tried running an explain plan for the query on the two servers? The query optimiser for 9i is different from the one for 10g, and the 10g query optimiser is much faster and parallelised. Check out the following link: Upgrading Query Optimiser
EXPLAIN PLAN FOR SELECT * FROM t_reading WHERE OLD_IND = 0 AND MEAS_ASS_ID IN (5022, 5003) AND read_date BETWEEN to_date('30/10/2012', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy');
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
To optimize SELECT queries, I run them both with and without an index and measure the difference. I run a bunch of different similar queries and try to select different data to make sure that caching doesn't throw off the results. However, on very large tables, indexes take a really long time to create, and I have several different ideas about what indexes would be appropriate.
Is it possible in Oracle (or any other database for that matter) to perform a query but tell the database to not use a certain index when performing the query? Or just turn off the index entirely, but be able to easily switch it back on without having to re-index the entire table? This would make it much easier to test, since I can create all the indexes I'm thinking about all at once, then try my queries using different ones.
Alternatively, is there any better way to go about optimizing queries on large tables and know which indexes would be best to create?
You can set index visibility in 11g -
ALTER INDEX idx1 [ INVISIBLE | VISIBLE ]
This makes it unusable by the optimizer, but Oracle still maintains the index when data is added or removed. That makes it easy to test performance with the index disabled, without having to drop and rebuild the whole index.
See here for the Oracle docs on index visibility.
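A typical test cycle looks like this (a sketch; the session parameter shown is the documented way to let one session still use invisible indexes while testing):
ALTER INDEX idx1 INVISIBLE;
-- run the test queries: the optimizer now ignores idx1
ALTER INDEX idx1 VISIBLE;
-- optionally, let only the current session use invisible indexes:
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;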
You can use the NO_INDEX hint in your queries to make the optimizer ignore particular indexes - see the docs for further details. The SQL Access Advisor is an Oracle utility that will recommend indexing strategies.
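For example (the table alias and index name are illustrative):
SELECT /*+ NO_INDEX(t my_index) */ *
FROM my_table t
WHERE some_column = 'some value';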
Well, you can write the query in such a way that it won't use the index (by using an expression instead of the bare column value).
For example
Select * from foobar where column1 = 'result' --uses index on column1
To avoid using the index on a number or a varchar column:
Select * from foobar where column1 + 0 = 5 -- simple expression to disable the index
Select * from foobar where column1 || '' = 'result' --simple expression to disable the index
Or you can just use NVL to disable the index in the query without worrying about the column's data type
Select * from foobar where nvl(column1,column1) = 'result' --i love this way :D
Similarly, you can use index hints
like /*+ INDEX(E employee_id) */ to force an index to be used (note the +; without it, Oracle treats the hint as an ordinary comment).
P.S. This is all paraphrased from Dan Tow's book SQL Tuning. I started reading it a few days ago :)