Amazon RDS Oracle 12.1 Automatic Report Capturing

I look after two RDS client sites, both of which are on Oracle 12.1.0.2.v12.
One of the instances has a high CPU load which appears to be caused by the 12.1 "feature" known as Automatic Report Capturing.
It's a known issue:
https://smarttechways.com/2017/10/11/with-monitor_data-as-select-inst_id-query-found-caused-performance-issue/
https://liups.com/wp-content/uploads/2018/01/Document-2102131.1.pdf
and can ordinarily be disabled by running alter system set "_report_capture_cycle_time"=0;
However, as this is an RDS instance this parameter can't be set.
What's puzzling me is that the other site doesn't appear to have this issue - the specific SQL statement doesn't appear in session history:
WITH MONITOR_DATA AS (SELECT INST_ID, KEY, NVL2(PX_QCSID, NULL, STATUS) STATUS, FIRST_REFRESH_TIME, LAST_REFRESH_TIME, REFRESH_COUNT, PROCESS_NAME, SID, SQL_ID, SQL_EXEC_START, SQL_EXEC_ID, DBOP_NAME, DBOP_EXEC_ID, SQL_PLAN_HASH_VALUE, SQL_FULL_PLAN_HASH_VALUE, SESSION_SERIAL#, SQL_TEXT, IS_FULL_SQLTEXT, PX_SERVER#, PX_SERVER_GROUP, PX_SERVER_SET, PX_QCINST_ID, PX_QCSID, CASE WHEN ELAPSED_TIME < (CPU_TIME+ APPLICATION_WAIT_TIME+ CONCURRENCY_WAIT_TIME+ CLUSTER_WAIT_TIME+ USER_IO_WAIT_TIME+ QUEUIN... (truncated)
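On each site I have been checking for the statement in the shared pool with something like this (a minimal sketch against v$sql; the LIKE pattern is an assumption based on the statement text above):
-- Sketch: is the report-capture statement cached, and how much CPU has it used?
SELECT sql_id, executions, ROUND(cpu_time / 1e6) AS cpu_seconds
FROM v$sql
WHERE sql_text LIKE 'WITH MONITOR_DATA AS (SELECT INST_ID%';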
It's as though the "feature" has been disabled on the other site somehow - or conversely, it's somehow been inadvertently enabled at the problematic site.
Any ideas on how this can be disabled?

Normally, if you have a huge amount of CPU consumption, you will probably see error messages of this type in the alert log:
Thu Sep 08 04:00:41 2016
Errors in file /app/oracle/diag/rdbms/dbname/dbinstance/trace/dbinstance_m002_14490.trc:
ORA-12850: Could not allocate slaves on all specified instances: 3 needed, 2 allocated
If that is the case, the feature can indeed only be disabled by running:
alter system set "_report_capture_cycle_time"=0; /* Default is 60 seconds */
Reason:
If the CPU consumption is significantly high then it is not
expected behaviour and could be due to the optimizer choosing a
suboptimal plan for the SQL statements. This can happen due to Adaptive
Optimization, a new feature in 12c.
Therefore, if you can't change the hidden parameter, you might try to disable Adaptive Optimization altogether:
alter system set optimizer_adaptive_features = false scope=both ;
As the documentation states:
In 12.1, adaptive optimization as a whole is controlled by the dynamic
parameter optimizer_adaptive_features, which defaults to TRUE. All of
the features it controls are enabled when optimizer_features_enable >=
12.1.
Either you upgrade to 19c, or you disable all optimizer adaptive features; Amazon RDS now supports 19c.
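To verify the change took effect, a minimal check (sketch) is:
-- Sketch: confirm the current value of the adaptive-features parameter
SELECT name, value, isdefault
FROM v$parameter
WHERE name = 'optimizer_adaptive_features';
On RDS, if ALTER SYSTEM is restricted, this parameter can usually be changed through the instance's DB parameter group instead.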

"optimizer_adaptive_features" is not related to "_report_capture_cycle_time" anyhow.

Related

What is the Oracle PL/SQL "--+rule"?

I am working with legacy SQL code and I am finding a lot of queries like the following:
SELECT --+rule
username,
usernotes
FROM
userinfotable
ORDER BY
username
I read the Oracle Optimizer Hints documentation, but I can't find an exact reference for --+rule. I suspect this rule is an obsolete artifact from a code generation tool that may have been designed to replace "--+rule" with user-supplied or generated /*+ SQL */ hint code.
What do you think? Does the --+rule code [literally] in the above example actually do anything as-is? Or can I just discard it?
Platform: Delphi 6 with Direct Oracle Access components, Oracle 10gR2 with the last supported updates. Most of the legacy SQL code was developed under Oracle 7 and 8.
The answer is in the documentation you referenced:
The following syntax shows hints contained in both styles of comments
that Oracle supports within a statement block.
{DELETE|INSERT|MERGE|SELECT|UPDATE} /*+ hint [text] [hint[text]]... */
or
{DELETE|INSERT|MERGE|SELECT|UPDATE} --+ hint [text] [hint[text]]...
The --+ hint format requires that the hint be on only one line.
So --+ is an allowed syntax for hints, although I don't think I have ever seen it used.
In Oracle SQL, the RULE hint means: use the Rule Based Optimizer (RBO) instead of the Cost Based Optimizer (CBO). Since Oracle 10g it is no longer supported. So you cannot simply discard it: it is still taken into account by Oracle, but without support...
The 10.2 documentation says:
Rule-based Optimization (RBO) Obsolescence
RBO as a functionality is no longer supported. RBO still exists in
Oracle 10g Release 1, but is an unsupported feature. No code changes
have been made to RBO and no bug fixes are provided. Oracle supports
only the query optimizer, and all applications running on Oracle
Database 10g Release 1 (10.1) should use that optimizer. Please review
the following Oracle Metalink desupport notice (189702.1) for RBO:
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=189702.1
You can also access desupport notice 189702.1 and related notices by
searching for "desupport of RBO" at:
http://metalink.oracle.com
Notice 189702.1 provides details about the desupport of RBO and the
migration of applications based on RBO to query optimization.
Some consequences of the desupport of RBO are:
CHOOSE and RULE are no longer supported as OPTIMIZER_MODE initialization parameter values, and a warning is displayed in the alert log if the value is set to RULE or CHOOSE. The functionalities of those parameter values still exist but will be removed in a future release. See "OPTIMIZER_MODE Initialization Parameter" for information about optimizer mode parameters.
ALL_ROWS is the default value for the OPTIMIZER_MODE initialization parameter.
The CHOOSE and RULE optimizer hints are no longer supported. The functionalities of those hints still exist but will be removed in a
future release.
Existing applications that previously relied on rule-based optimization (RBO) need to be moved to query optimization.
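If you want to see whether the hint still changes anything on your 10gR2 system, one way (a sketch using the question's table and columns) is to compare the plans with and without it:
-- Sketch: explain the query as written, hint included
EXPLAIN PLAN FOR
SELECT --+rule
username, usernotes
FROM userinfotable
ORDER BY username;

SELECT * FROM table(DBMS_XPLAN.DISPLAY);
-- Repeat without the hint and compare; if the RBO is still honored, its plan
-- typically shows no cost estimates and a note that the rule based optimizer was used.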

Oracle Bind Query is very slow

I have an Oracle bind query that is extremely slow (about 2 minutes) when it executes in my C# program, but runs very quickly in SQL Developer. It has two parameters that hit the table's index:
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
Also, if I remove the bind variables and create dynamic SQL, it runs just like it does in SQL Developer.
Any suggestions?
BTW, I'm using ODP.NET.
If you are replacing the bind variables with static values in SQL Developer, then you're not really running the same test. Make sure you use the bind variables, and if the query is also slow then you're just getting bitten by a bad cached execution plan; updating the stats on that table should resolve it.
However, if you are actually using bind variables in SQL Developer, then keep reading. The TL;DR version is that the settings ODP.NET runs under can sometimes cause a slightly more pessimistic approach from the optimizer. Start with updating the stats, but have your DBA capture the execution plan under both scenarios and compare them to confirm.
I'm reposting my answer from here: https://stackoverflow.com/a/14712992/852208
I considered flagging yours as a duplicate, but your title is a little more concise since it identifies that the query runs fast in SQL Developer. I'll welcome advice on handling this in another manner.
Adding ODP.NET trace settings to your config (for the unmanaged driver these are the TraceLevel and TraceFileName entries under oracle.dataaccess.client) will send ODP.NET tracing info to a log file.
This will probably only be helpful if you can find a large gap in time. Chances are the rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string. I don't consider this a solution, since it effectively disables distributed transactions, but it should help you isolate the issue. You can get a little more information from an Oracle forums post:
From an ODP perspective, all we can really point out is that the
behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME
are set on the underlying OCI connection (which is what happens when
distrib tx support is enabled).
I'd guess what you're not seeing is that the execution plan is actually different between the ODP.NET call and the SQL Developer call (meaning the actual performance hit is occurring on the server). Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables. In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
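If you want to make that comparison yourself rather than via a trace, a sketch (substitute the real sql_id for the placeholder):
-- Sketch: pull the cached plan(s) for the statement and compare plan hashes
SELECT sql_id, child_number, plan_hash_value
FROM v$sql
WHERE sql_id = 'your_sql_id';

SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('your_sql_id', NULL, 'BASIC'));
If the plan_hash_value differs between the ODP.NET child cursor and the SQL Developer one, you have confirmed the server-side plan difference.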
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect unless you did some post-install configuring) then the tnsnames.ora could be different. Host names in tnsnames.ora might not be fully qualified, creating more delays while resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think it's the issue, but I thought it should be mentioned.
Are the parameters bound to the correct data type in C#? For example, if the columns key1 and key2 are strings but the parameters :key1 and :key2 are bound as numbers, the query may return the correct results but will require an implicit conversion. That implicit conversion effectively wraps the column in a function, like TO_NUMBER(key1), which prevents an index on it from being used.
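As an illustration (a sketch only; assume key1 were a VARCHAR2 column, with the literal standing in for a numeric bind), the conversion shows up in the plan's predicate section:
-- Sketch: numeric comparison against a (hypothetically) VARCHAR2 column
EXPLAIN PLAN FOR
SELECT t.Field1, t.Field2 FROM theTable t WHERE t.key1 = 42;

SELECT * FROM table(DBMS_XPLAN.DISPLAY);
-- The predicate section would show filter(TO_NUMBER("KEY1")=42):
-- the conversion wraps the column, so a plain index on KEY1 cannot be used.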
Please also check the number of rows returned by the query. If the number is big, then possibly C# is fetching all rows while the other tool fetches only the first batch. Fetching all rows may require many more disk reads in that case, which is slower. To check this, try running the following in SQL Developer:
SELECT COUNT(*) FROM (
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
)
Wrapped in COUNT(*), the query above must fetch all matching rows, so it reads the maximum number of database blocks.
A nice tool in such cases is the tkprof utility, which shows the SQL execution plan; the plans may differ between the cases above (although they should not).
It is also possible that you have accidentally connected to different databases. In such cases it is worth comparing the results of the queries.
Since you are raising "bind is slow", I assume you have checked the SQL without binds and it was fast. In 99% of cases, using binds makes things better. Please check whether the query with constants runs fast. If yes, then the problem may be an implicit conversion on the key1 or key2 column (e.g. the column and the bind variable have different data types).

Inactive sessions in Oracle

I would like to ask a question:
This is my environment :
Solaris Version 10; Sun OS Version 5.10
Oracle Version: 11g Enterprise x64 Edition.
When I am running this query :
select c.owner, c.object_name, c.object_type,b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a , v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;
Sometimes many of the sessions have the status 'INACTIVE'.
What does this inactive status mean?
Will this make my DB and application slow?
What are the effects of the ACTIVE and INACTIVE statuses?
What does this inactive status mean?
Just before the oracle executable executes a read to get the next "command" that it should execute for its session, it will set its session's state to INACTIVE. After the read completes, it will set it to ACTIVE. It will remain in that state until it is done executing the requested work. Then it will do the whole thing all over again.
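You can watch the mix of states yourself with a trivial query (sketch):
-- Sketch: current session mix by state
SELECT status, COUNT(*) AS sessions
FROM v$session
GROUP BY status;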
Will this make my DB and application slow?
Not necessarily. See the answer to your final question.
What are the effects of the ACTIVE and INACTIVE statuses?
The consequences of a large number of sessions (ACTIVE or INACTIVE) are significant in two ways.
The first is if the number is monotonically increasing, which would lead one to investigate the possibility that the application is leaking connections. I'm confident that such a catastrophe is not the case, otherwise you would have mentioned it specifically.
The second, where the number fluctuates within the declared upper bound, is more likely. According to Andrew Holdsworth and other prominent members of Oracle's Real-World Performance (RWP) team, some architects allow too many connections in the application's connection pool, and they demonstrate what happens (response time and availability consequences) when it is too high. They also have a prescription for how to better define the connection pool's attributes and behavior.
The essence of their argument is that by allowing a large number of connections in the pool, you allow them to all be busy at the same time. Rather than having the application tier queue transactions, the database server may have to play a primary role in queuing for low level resources like disk, CPU, network, and even other things like enqueues.
Even if all the sessions are busy for only a short time and they're contending for various resources, the contention is wasteful and can repeat over and over and over again. It makes more sense to spend extra time devising a good user experience queueing model so that you don't waste resources on what is undoubtedly the most expensive (hardware and software licenses) tier in your architecture.
ACTIVE means the session is currently executing some SQL operations, whereas INACTIVE means the opposite. Check out the Oracle v$session documentation.
By nature, a high number of ACTIVE sessions will slow down the whole DBMS, including your application. To what extent is hard to say; here you have to look at the I/O, CPU, etc. loads.
Inactive sessions will have a low impact unless you exceed the maximum session number.
In simple words, an INACTIVE status in v$session means no SQL statement is being executed at the time you check in v$session.
On the other hand, if you see a lot of inactive sessions, then first check the last activity time for each session (see the sketch at the end of this answer).
What I suspect is that they might be part of a connection pool, and hence might be getting used frequently. You shouldn't worry about them, since from an application connection perspective the connection pool will take care of it. You might see such INACTIVE sessions for quite a long time.
It does not at all suggest the DB is slow.
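A sketch of that last-activity check (last_call_et is the number of seconds since the session's last call):
-- Sketch: how long each inactive session has been idle
SELECT sid, serial#, username, machine, last_call_et AS seconds_idle
FROM v$session
WHERE status = 'INACTIVE'
ORDER BY last_call_et DESC;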
Status INACTIVE means the session is not executing any query at that moment.
ACTIVE means it is executing a query right now.

Cognos report performance and cache

I am working on Cognos 8; one of my reports takes roughly 1 minute to run, but sometimes only 20 seconds as it loads from cache. I want to prove that the report ran from cache the second time. How can I prove that? Is the performance logged somewhere?
Cognos 8 uses the old 32-bit CQM engine.
The cache of this engine is very primitive:
Cache only works within the same session.
It only works if the query is identical.
By default it caches the last 5 queries.
So, based on the limitations above, you can do the following:
Run the report in a different session (different browser or user).
Change any prompt value to a different one.
This will ensure the report is not running from cache.
If you want to trace the performance of queries, then using the DB to capture the queries is the most efficient way (see the sketch after the link below). The alternative would be activating the Cognos IPF trace:
Cognos 8 report performance issues
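If the reporting database happens to be Oracle, a DB-side check could be as simple as this sketch (the LIKE pattern is a placeholder for a distinctive fragment of the report's SQL):
-- Sketch: did the report's SQL hit the database again, or was it cached?
SELECT sql_id, executions, last_active_time
FROM v$sql
WHERE sql_text LIKE '%distinctive report fragment%';
If executions does not increase on the second run, the result came from the Cognos cache rather than the database.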

Actively tracking oracle query performance

Background:
We have a database environment where views are calling views which are calling views... the logic has become complex and now changes to underlying views can have significant impact on the top view being called by the client application.
Now, while we are documenting all the logic and figuring out how to unwind everything, development continues and performance continues to degrade.
Currently I manually run an explain plan on a client query and dig into tuning it. This is a slow and tedious process, and changes may not be examined for ages.
Problem:
I want to generate a report that lists the SQL ID and, compared with the average computed over the last month, the changes in actual time, the discrepancy between estimated and actual rows, and the changes in buffers and reads.
I would generally run the following script manually and examine it based just on that day's response.
ALTER SESSION SET statistics_level=all;
set linesize 256;
set pagesize 0;
set serveroutput off;
-- QUERY
SELECT *
FROM table(DBMS_XPLAN.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
What I am trying to do is see about automating the explain plan query and inserting the statistics into a table. From there I can run a regression report to detect changes in the performance which can then alert the developers.
I was thinking something like this would be common enough without having to resort to OEM. I can't find anything, so I wonder if there is a more common approach to this?
Oracle provides functionality for this with the Automatic Workload Repository (AWR): http://docs.oracle.com/cd/E11882_01/server.112/e16638/autostat.htm
It's an extra license on top of Enterprise Edition though, I believe. It ought to be usable in non-production environments without additional cost, but check with your Oracle sales rep.
It sounds like you are on the road to re-inventing STATSPACK. Oracle still includes it in the database but no longer documents it, presumably because it's free, unlike AWR and ASH. You can still find the documentation in the 9i manual.
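If you do end up rolling your own lightweight version, a sketch of the kind of snapshot table the question describes (all names hypothetical):
-- Sketch: empty history table shaped like v$sqlstats plus a snapshot time
CREATE TABLE sql_stats_history AS
SELECT SYSDATE AS snap_time, sql_id, plan_hash_value, executions,
elapsed_time, cpu_time, buffer_gets, disk_reads, rows_processed
FROM v$sqlstats
WHERE 1 = 0;

-- Run periodically from a scheduler job; deltas between snapshots feed the regression report
INSERT INTO sql_stats_history
SELECT SYSDATE, sql_id, plan_hash_value, executions,
elapsed_time, cpu_time, buffer_gets, disk_reads, rows_processed
FROM v$sqlstats;
COMMIT;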
Active Session History (ASH) is what you are looking for:
select * from v$active_session_history where sql_id = :yoursqlid
