Oracle sys.aud$ / dba_audit_session monitoring - optimize SQL performance

I have the following query, which monitors whether anyone has tried to log on with one of the technical users on the database:
SELECT COUNT (OS_USERNAME)
FROM DBA_AUDIT_SESSION
WHERE USERNAME IN ('USER1','USER2','USER3')
AND TIMESTAMP>=SYSDATE - 10/(24*60)
AND RETURNCODE !='0'
Unfortunately the performance of this SQL is quite poor, since it does a TABLE ACCESS FULL on sys.aud$. I tried to narrow it down with:
SELECT COUNT (sessionid)
FROM sys.aud$
WHERE userid IN ('USER1','USER2','USER3')
AND ntimestamp# >=SYSDATE - 10/(24*60)
AND RETURNCODE !='0'
and action# between 100 and 102;
And it is even worse. Is it possible at all to optimize that query by forcing Oracle to use indexes here? I would be grateful for any help & tips.

SYS.AUD$ does not have any default indexes but it is possible to create one on ntimestamp#.
But proceed with caution. The support document "The Effect Of Creating Index On Table Sys.Aud$ (Doc ID 1329731.1)" includes this warning:
Creating additional indexes on SYS objects including table AUD$ is not supported.
Normally that would be the end of the conversation and you'd want to try another approach. But in this case there are a few reasons why it's worth a shot:
The document goes on to say that an index may be helpful, and to test it first.
It's just an index. The SYS schema is special, but we're still just talking about an index on a table. It could slow things down, or maybe cause space errors, like any index would. But I doubt there's any chance it could do something crazy like cause wrong results bugs.
It's somewhat common to change the tablespace of the audit trail, so that table isn't sacred.
I've seen indexes on it before. 2 of the 400 databases I manage have an index on the columns SESSIONID,SES$TID (although I don't know why). Those indexes have been there for years, have been through an upgrade and patches, and haven't caused problems as far as I know.
Creating an "unsupported" index may be a good option for you, if you're willing to test it and accept a small amount of risk.

From Oracle 10g onwards the optimizer should choose the best plan for your query, provided you write proper joins. I'm not sure how many records exist in your DBA_AUDIT_SESSION, but you can always use PARALLEL hints to somewhat speed up the execution.
SELECT /*+Parallel*/ COUNT (OS_USERNAME)
FROM DBA_AUDIT_SESSION
WHERE USERNAME IN ('USER1','USER2','USER3')
AND TIMESTAMP>=SYSDATE - 10/(24*60)
AND RETURNCODE !='0'
The query cost drops to 3, compared to earlier.

NumRows: 8080019
So the table is pretty large, due to company retention regulations. Unfortunately using /*+Parallel*/ here makes it run longer, so performance is still worse.
Any other suggestions?

Related

How to find and delete unused indexes in Oracle?

I'm pretty new to database work. We are using Oracle 19c and I only have SQL Developer available. I have a task to find and remove unused indexes from all tables of a particular owner (say owner = QTSD). I'm using the query below:
select * from ALL_INDEXES where owner='QTSD';
From the result I check the LAST_ANALYZED column, and I plan to drop, one by one, the indexes that were last analyzed a year ago. Is this the correct approach? Also, does dropping the indexes require DB downtime, given that the corresponding tables hold ~100 million rows?
The fact that an index was last analyzed a long time ago doesn't mean much: LAST_ANALYZED only records when statistics were last gathered on it (which reflects your stats-gathering schedule), not whether the index is actually used by queries.
The optimizer decides whether to use an index or not. You can monitor an index's usage with, e.g.:
alter index my_index monitoring usage;
Do that index by index (if only a few look suspicious), or write a script which generates the statements for you:
select 'alter index ' || index_name || ' monitoring usage;' from user_indexes;
Copy/paste that script to execute them all.
Then wait some time (how long, I can't tell; it may take a while until enough SQL has run in the database for usage information to accumulate), then query v$object_usage and pay attention to the USED column.
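For example, once monitoring has been on for a while (note that v$object_usage only shows indexes owned by the current user):
SELECT index_name, table_name, monitoring, used, start_monitoring, end_monitoring
FROM v$object_usage;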
After that, you'll be somewhat smarter (meaning: you won't be just lucky guessing whether some index was used or not) and, hopefully, be able to drop unused indexes.
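Dropping an index you have confirmed to be unused is then a single statement; it takes a brief lock on the table but does not require downtime (the index name below is a placeholder):
DROP INDEX qtsd.some_unused_index;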
It is better to use the database's index monitoring features to find unused indexes. On version 12.1 or lower that means the ALTER INDEX ... MONITORING USAGE / V$OBJECT_USAGE approach described above; on 12.2 or newer, index usage is tracked automatically and exposed in the DBA_INDEX_USAGE and V$INDEX_USAGE_INFO views.
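On 12.2 or newer, something along these lines lists the tracked usage for the owner from the question (indexes that never show up here have no recorded usage; be aware the tracking is sampled by default and flushed to the dictionary periodically):
SELECT name, total_access_count, last_used
FROM dba_index_usage
WHERE owner = 'QTSD';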

LAST_ANALYZED null after dbms_stats.gather_table_stats

We have a GoldenGate replication from a prod environment.
The tables were initially dumped from prod, and afterwards the GoldenGate replication was started. Now we want to migrate the data to another database, but the query plans differ from the prod environment. We think this is because the statistics in the replicated database are broken/wrong.
The number of rows stated in dba_tables is null, 0, or differs by 50-80%.
We tried to do dbms_stats.gather_table_stats on all relevant tables.
It's still broken. We ran this call for every table that had wrong statistics:
dbms_stats.GATHER_TABLE_STATS(OWNNAME => 'SCHEMA', TABNAME => 'TABLE_NAME', CASCADE => true);
We can't migrate with these bad query plans.
We are using Oracle Release 12.2.0.1.0 - Production
EDIT: After Jon Heller's answer we saw that some indexes are partitioned in the prod environment but not in the replication. Additionally, the global preference DEGREE is 32768 on the replication and NULL on prod.
Are the tables built exactly the same way? Maybe a different table structure is causing the statistics to break, like if one table is partitioned and another is not. Try comparing the DDL:
select dbms_metadata.get_ddl('TABLE', 'TABLE1') from dual;
I'm surprised to hear that statistics are wrong even after gathering stats. Especially the number of rows - since 10g, that number should always be 100% accurate with the default settings.
Can you list the exact commands you are using to gather stats? Also, this is a stretch, but possibly the global preferences were changed on one database. It would be pretty evil, but you could set a database default to only look at 0.00001% of the data, which would create terrible statistics. Check your global preferences on both databases:
--Thanks to Tim Hall for this query: https://oracle-base.com/dba/script?category=monitoring&file=statistics_prefs.sql
SELECT DBMS_STATS.GET_PREFS('AUTOSTATS_TARGET') AS autostats_target,
DBMS_STATS.GET_PREFS('CASCADE') AS cascade,
DBMS_STATS.GET_PREFS('DEGREE') AS degree,
DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT') AS estimate_percent,
DBMS_STATS.GET_PREFS('METHOD_OPT') AS method_opt,
DBMS_STATS.GET_PREFS('NO_INVALIDATE') AS no_invalidate,
DBMS_STATS.GET_PREFS('GRANULARITY') AS granularity,
DBMS_STATS.GET_PREFS('PUBLISH') AS publish,
DBMS_STATS.GET_PREFS('INCREMENTAL') AS incremental,
DBMS_STATS.GET_PREFS('STALE_PERCENT') AS stale_percent
FROM dual;
If gathering statistics still leads to different results, the only thing I can think of is corruption. It may be time to create an Oracle service request.
(This is more of an extended comment than an answer, but it might take a lot of code to diagnose this problem. Please update the original question with more information as you find it.)
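Given the EDIT above (global DEGREE preference of 32768 on the replication, NULL on prod), a natural next step is to reset that preference and re-gather. A minimal sketch (passing NULL restores the Oracle default value; the schema and table names are the question's placeholders):
EXEC DBMS_STATS.SET_GLOBAL_PREFS('DEGREE', NULL);
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'SCHEMA', TABNAME => 'TABLE_NAME', CASCADE => true);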

SDO_JOIN table join on large tables no longer returns results after upgrade from Oracle 11.2.0.1 to 11.2.0.4

I have the same database on 2 different Oracle servers, one is 11.2.0.1.0 and the other is 11.2.0.4.0.
I have the same two geometry tables in both databases and run the following query on both servers. On 11.2.0.1.0 the query runs for a few minutes and I get results; the same query on 11.2.0.4.0 runs for about 3 seconds and returns no results.
The BLPUs table holds 36 million points and the PD_B2 table holds a polygon. I am trying to find all the points that fall in the polygon.
Other spatial queries do return rows, but they take hours and hours, whereas the table join suggested in the Oracle Spatial documentation takes 15 minutes to return all the points.
SELECT /*+ ordered */ a.uprn
FROM TABLE(SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC','mask=ANYINTERACT')) c, blpus a, PD_B2 b
WHERE c.rowid1 = a.rowid
AND c.rowid2 = b.rowid;
The spatial queries below return an empty SDO_ROWIDSET() when run on the 11.2.0.4 server:
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC','mask=ANYINTERACT')
from dual;
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC')
from dual;
On the 11.2.0.1 server they return results.
I have discovered that a much smaller table of points will work on 11.2.0.4, so it seems that there is a size limit on 11.2.0.4 when using SDO_JOIN, whereas 11.2.0.1 seems to cope with the large table.
Does anyone know why this is or if there is an actual limit on table size when using SDO_JOIN?
This is strange. I see no reason why SDO_JOIN will not work the same way in 11.2.0.4. At least I have not seen that sort of behavior before. It looks like a bug to me and I suggest you file a service request with Oracle Support so we can take a look. You may need to provide a dump of the tables - or at least of a small enough subset that demonstrates the problem.
That said, there are a few things to check: did you apply the 11.2.0.4 patch to the same database? I.e. nothing changed in terms of table structures or content, grants, etc.?
When you say that the query returns no rows, does it do so immediately? Or does it perform some processing before completing without anything being returned?
How large is the PD_B2 table?
What happens when you do:
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC','mask=ANYINTERACT')
from dual;
Does this also return nothing?
What happens if you do
select SDO_JOIN('BLPUS', 'GEOLOC', 'PD_B2', 'GEOLOC')
from dual;
Both should return something that looks like this:
SDO_ROWIDSET(SDO_ROWIDPAIR('AAAW3LAAGAAAADmAAu', 'AAAW3TAAGAAAAg7AAC'), SDO_ROWIDPAIR('AAAW3LAAGAAAADmABE', 'AAAW3TAAGAAAAgrAAA'),...)
You will see this if you run the query in sqlplus. [If you use a GUI (like TOAD or SQLDeveloper) then you may not see that. All those GUIs have problems dealing with objects or arrays.]
But the fact that your 11.2.0.4 tests complete very quickly probably means that you get an empty result back, maybe like this:
SDO_ROWIDSET()
and that confirms that something did not work.
Now, from what you say, PD_B2 only contains one row? If so then there is no reason whatsoever to go via SDO_JOIN. A straightforward spatial query is easier to write. SDO_JOIN only makes sense when both tables being joined contain multiple rows. Then again, if one of the tables is very small (like the PD_B2 table in your case), it falls back to a simple query anyway.
A simple query would look like this:
SELECT a.uprn
FROM blpus a, PD_B2 b
WHERE sdo_anyinteract (a.geoloc, b.geoloc) = 'TRUE';
Does this return what you expect in 11.2.0.4?
You may also want to examine the cost of the query and the query plan used. In sqlplus, do this:
set timing on
set autotrace traceonly
then run the above query. The effect is that sqlplus will not display any results - but it will still fetch and format the output: it will just not print them. At the end you will get a printout of the query plan used as well as some execution statistics and the elapsed time. Please add those results to your question.
Running the query from within your application should have a similar profile: similar response time and database-side costs - assuming you do fetch all the 1.3 million rows.
BTW, what do you do with those results? Do you show them in a report? Do you save them into a table for later analysis? Surely you do not want to show them all on a map?
To clarify: SDO_JOIN is only for matching many to many geometries. For matching one to many, the simple SDO_ANYINTERACT() is what you need. As a matter of fact, SDO_JOIN will automatically fall back to a simple SDO_ANYINTERACT when one of the sets is very much smaller than the other. I can't see how SDO_JOIN could be faster in your circumstances (i.e. when searching for all objects that match one object) since it will perform the same as an SDO_ANYINTERACT and will require extra joins to the two tables.
Going back to the original issue, the difference in SDO_JOIN behavior between 11.2.0.1 and 11.2.0.4: that is IMO definitely a bug, and you need to report it to Oracle Support.

Same query, slow on Oracle 9, fast on Oracle 10

We have a table called t_reading, with the following schema:
MEAS_ASS_ID NUMBER(12,0)
READ_DATE DATE
READ_TIME VARCHAR2(5 BYTE)
NUMERIC_VAL NUMBER
CHANGE_REASON VARCHAR2(240 BYTE)
OLD_IND NUMBER(1,0)
This table is indexed as follows:
CREATE INDEX RED_X4 ON T_READING
(
"OLD_IND",
"READ_DATE" DESC,
"MEAS_ASS_ID",
"READ_TIME"
)
This exact table (with the same data) exists on two servers, the only difference is the Oracle version installed on each one.
The query in question is:
SELECT * FROM t_reading WHERE OLD_IND = 0 AND MEAS_ASS_ID IN (5022, 5003) AND read_date BETWEEN to_date('30/10/2012', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy');
This query executes in less than a second on Oracle 10, and around a minute in Oracle 9.
Are we missing something?
EDIT:
Execution plan for Oracle 9: (screenshot not reproduced here)
Execution plan for Oracle 10: (screenshot not reproduced here)
"Are we missing something?"
Almost certainly, but it's difficult for us to tell you what.
There were some performance improvements in the CBO from 9i to 10g but it's unlikely to make that much difference. So it must be some variation in your systems, which is obviously the hardest thing for us to diagnose, blind and remote as we are.
So, the first things to rule out are general system differences: disk speeds, I/O bottlenecks, memory sizing, etc. You say you have two servers; do they have different specs? Whilst it will require assistance from a sysadmin type to investigate these things, we can discount them with a single question: is it just this query, or can you reproduce this effect with many different queries?
If it is just the query, there are at least three possible explanations.
One is data distribution. How was the data populated in the two databases? If the 10g database was exported from the 9i database, was it sorted in some fashion? Even if it wasn't, it is possible that the ETL process compacted and organised the data and built the fresh indexes in a way which improves the access times.
Another is statistics. Are the 10g statistics fresh and realistic, whilst the 9i statistics are stale and misleading?
A third possibility is a stored execution plan. (You have posted a query with literals, this only applies to queries with bind variables.) Searches on date ranges are notoriously hard to tune. A date range of to_date('30/10/2012', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy') suits one sort of plan, whereas date range of to_date('01/01/2010', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy') may well suit a different approach. If the extant plan on the 9i database suits a broader range then queries for a narrow range may take a long time.
While I've been typing this you have published the explain plans. The killer detail is at the bottom of the 9i plan:
Note: rule-based optimization
You haven't got any stats for the table or the index, so the optimizer is applying the dumb defaults of the RBO. You should really address this, but it's not a simple task. You may need to gather stats for all your tables. You may need to change the OPTIMIZER_MODE in the init.ora file. You may need to undertake a regression test of all the queries on your database. So it's not something you should do lightly.
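For the stats-gathering part, a schema-wide sketch (run it per schema, and regression-test afterwards as warned above):
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER, cascade => TRUE);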
In the meantime, if this query is bugging you, you'll need to wrangle the Rule-Based Optimizer the old-fashioned way.
A couple of potential explanations:
You're range scanning different indexes. Assuming that you've got the same index on your 10g table and have just given it a different name, the explain plans are different.
The main worry I would have is the lack of information in the rows, bytes and cost columns of the explain plan for your 9i query. Oracle 9i does not collect statistics by default, and this detail indicates that you have not collected statistics on this table. Use dbms_stats to gather statistics on your table and its indexes, specifically the procedure gather_table_stats:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => user,
    tabname          => 'T_READING',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL INDEXED COLUMNS',
    cascade          => TRUE -- gather index statistics too
  );
END;
/
There are plenty of other options if you're interested. Assuming the indexes are different this may help the CBO (assuming it's "turned on") to pick the correct index.
The other options include what server they are on and what the database parameters are. If they're on different servers then the relative "power", disk-speed, I/O and a never-ending list of other options could easily cause a difference. If the database parameters are different then you have the same problem.
Database tuning is an art as much as a science. Oracle has a whole book on it and there are plenty of other resources out there.
A few observations:
your index is a DESCENDING index. This is a function-based index and, as such, it won't work as expected under the RULE optimizer.
your 9i plan shows access only on OLD_IND; your 10g plan (you cut off the important predicate bits) shows a range scan + inlist iterator, so depending on that RED_PK it may be accessing on MEAS_ASS_ID, which is perhaps more selective.
in terms of indexing too: your query has OLD_IND equality, MEAS_ASS_ID equality and a READ_DATE range scan, so a better index is (OLD_IND, MEAS_ASS_ID, READ_DATE): do the range scan last to cut down on IO (see the sketch below).
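A sketch of that suggested index (the name is a placeholder):
CREATE INDEX red_x5 ON t_reading (old_ind, meas_ass_id, read_date);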
Have you tried running an explain plan for the queries on the two servers? The query optimiser for 9i is different from the one for 10g, and the 10g query optimiser is much faster and better parallelised. Check out Oracle's note on upgrading the query optimiser.
EXPLAIN PLAN FOR SELECT * FROM t_reading WHERE OLD_IND = 0 AND MEAS_ASS_ID IN (5022, 5003) AND read_date BETWEEN to_date('30/10/2012', 'dd/mm/yyyy') AND to_date('31/10/2012', 'dd/mm/yyyy');
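After running EXPLAIN PLAN, the plan can then be displayed with DBMS_XPLAN (available from 9.2 onwards):
SELECT * FROM TABLE(dbms_xplan.display);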

How to log statements that use an index in Oracle?

I have an index on a table in Oracle, and I don't know whether it is being used or how it is used. Is there a way to log all statements that use that index? If not, is there a way to know whether that index is being used at all?
Thanks,
Understanding the ultimate goal would definitely be helpful.
You can query the GV$SQL_PLAN view to see which queries in the shared pool are using a particular index:
SELECT *
FROM gv$sql_plan
WHERE object_owner = <<owner of object>>
AND object_name = <<name of index>>
Depending on your system, however, SQL statements may not stay in the shared pool particularly long so you may need to poll relatively frequently to ensure you don't miss anything. Depending on the Oracle version, edition, and licensed options, you may also be able to query the AWR tables to get historical plan information. But that will only have information on queries that were sufficiently expensive that they were captured in an AWR snapshot.
SELECT *
FROM dba_hist_sql_plan
WHERE object_owner = <<owner of object>>
AND object_name = <<name of index>>
If what you are trying to accomplish is to figure out whether an index is being used, however, you probably want to use something like index monitoring and let Oracle track which indexes are being used. Be aware, however, that there are pitfalls to this approach. For example indexes on foreign keys that are necessary for efficient deletes may not be flagged as being used by index monitoring (nor will they be caught by just looking at query plans). Index monitoring may also miss cases where statistics on an index are used to come up with an efficient plan despite the index itself not being used.
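If the goal is to log the statements themselves, the plan views can be joined back to the captured SQL text. A sketch against the AWR history (the same licensing caveats as above apply; the owner and index name are placeholders, and the subquery avoids a DISTINCT on the CLOB sql_text column):
SELECT t.sql_id, t.sql_text
FROM dba_hist_sqltext t
WHERE t.sql_id IN (SELECT p.sql_id
                   FROM dba_hist_sql_plan p
                   WHERE p.object_owner = 'SOME_OWNER'
                   AND p.object_name = 'SOME_INDEX');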
