Recently I ran into the problem that some (theoretically irrelevant) formal changes to the code of a function (even adding or removing a single space character) can greatly affect the function's performance (see my previous questions here and here).
The mystery was solved by Jon Heller, who explained:
If adding spaces to the code changes performance, this is likely a
plan management issue. Many Oracle tuning tools operate on the SQL_ID,
which is like an MD5 hash of the SQL text. So if you change a single
character of the SQL text, the optimizer treats the code like a brand
new statement. Any plan management fixes, like a SQL profile, or plan
outline, will not be applied to the new statement. Maybe a DBA tuned
an old statement with an /*+ INDEX... */ hint, but that hint isn't
carried over to the new statement. Compare the Note sections in the
DBMS_XPLAN output.
and:
A space in a SQL statement would change the SQL_ID, which could cause
the optimizer to no longer match the SQL statement with plan
management features like profiles, outlines, baselines (possibly -
they're supposed to be able to avoid this problem in some cases),
patches, advanced rewrites, etc.
So the only question I have left is: how can I get rid of the stuck bad execution plans? How can I "clean" them out of Oracle?
This worked for me:
alter system flush shared_pool;
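Flushing the whole shared pool is a blunt instrument, though. If you know the SQL_ID of the offending statement, a more targeted option is to purge just that cursor with DBMS_SHARED_POOL.PURGE (a sketch, assuming you have execute rights on DBMS_SHARED_POOL and the cursor is still in the library cache; the SQL_ID and the address/hash values below are placeholders):
-- Find the cursor's address and hash value by its SQL_ID (placeholder value).
SELECT address || ',' || hash_value FROM v$sqlarea WHERE sql_id = 'abcd1234efgh5';
-- Purge that single cursor from the library cache ('C' = cursor).
BEGIN
   DBMS_SHARED_POOL.PURGE('000000008A9B3C40,1234567890', 'C');
END;
/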
Related
Is there a table in Oracle that records which dynamic SQL statements have been executed recently (like "dba_source" for searching a package)?
There's no difference in the kernel between dynamic and non-dynamic SQL. You can look into v$sql and its derivatives to see what is being executed or was recently executed, or look into the dba_hist_sql... views for older statements, though there's no guarantee that you'll see all of them. Note that the dba_hist_... views require the Diagnostics Pack license.
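A minimal sketch of that kind of lookup (the search string 'MY_TABLE' is just a placeholder):
-- Recently executed statements still in the cursor cache.
SELECT sql_id, sql_text, last_active_time, executions
FROM   v$sql
WHERE  sql_text LIKE '%MY_TABLE%'
ORDER  BY last_active_time DESC;
-- Older statements captured by AWR (Diagnostics Pack required).
SELECT sql_id, sql_text
FROM   dba_hist_sqltext
WHERE  sql_text LIKE '%MY_TABLE%';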
I'm trying to export something using a select statement that runs for a very long time, and I keep getting ORA-01555 "snapshot too old" errors. I searched for this error and it has something to do with the select statement needing old data from the rollback segments (the undo tablespace).
How do I select without getting this error? I don't care about the integrity of the results I'm going to get or any other consequences that this may bring about.
Oracle does not allow reading inconsistent results and does not provide the corresponding "read uncommitted" isolation level (if that can even be called an isolation level). If you don't care about consistency, you can split the query into several parts (using different where clauses), as sketched below. If you want to fix the error, you would have to resize the undo tablespace (or change the undo retention) - but that is a job for a DBA (if it is necessary at all).
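As a rough illustration of the splitting approach (the table and column names here are made up), run the export in smaller slices so each query has to hold its read-consistent snapshot for a shorter time:
-- Hypothetical example: big_table and created_date are placeholder names.
SELECT * FROM big_table WHERE created_date <  DATE '2013-01-01';
SELECT * FROM big_table WHERE created_date >= DATE '2013-01-01' AND created_date < DATE '2013-07-01';
SELECT * FROM big_table WHERE created_date >= DATE '2013-07-01';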
I am facing an ORA-07445 issue with the Automatic SQL Tuning Advisor. The advisor keeps failing with
ORA-07445
while it tries to tune a particular SQL statement.
Is there any way to make the Automatic SQL Tuning Advisor job skip this SQL statement?
The simplest way to avoid the Automatic SQL Tuning Advisor may be to convert the query into a form that is not supported by the program.
According to the "Automatic SQL Tuning" chapter of the "Database Performance Tuning Guide":
The database ignores recursive SQL and statements that have been tuned
recently (in the last month), parallel queries, DML, DDL, and SQL
statements with performance problems caused by concurrency issues.
If the query select * from dba_objects was causing problems, try re-writing it like this:
select * from dba_objects
union all
--This query block only exists to avoid the Automatic SQL Tuning Advisor.
select /*+ parallel(dba_objects 2) */ * from dba_objects
where 1=0;
It is now a "parallel query" although it will not truly run in parallel because of the 1=0. I haven't tested this, and I imagine it will be difficult for you to test, because you'll need to flush the existing AWR data to prevent the errors.
This is one of the reasons why I usually disable the Automatic SQL Tuning Advisor. I like the idea of it, but in practice I've literally never seen the tuning advisor provide useful information. All it has ever done for me is generate alerts.
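For reference, disabling it system-wide is typically done through DBMS_AUTO_TASK_ADMIN (a sketch; it requires DBA privileges):
BEGIN
   DBMS_AUTO_TASK_ADMIN.DISABLE(
      client_name => 'sql tuning advisor',
      operation   => NULL,
      window_name => NULL);
END;
/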
In theory, the package DBMS_AUTO_SQLTUNE contains the parameters BASIC_FILTER, OBJECT_FILTER, and PLAN_FILTER. I assume one of those could be useful, but I don't think they are implemented yet. I can't find any references to them on Google or My Oracle Support, and when I entered random text for the values there were no errors.
Ideally we would look up every ORA-00600 and ORA-07445 error, create an SR, and fix the underlying problem. But who has time for that? When you encounter a database "bug", the best solution is usually to avoid it as quickly as possible.
I have the following query, which monitors whether anyone has tried to log on with one of the technical users on the database:
SELECT COUNT (OS_USERNAME)
FROM DBA_AUDIT_SESSION
WHERE USERNAME IN ('USER1','USER2','USER3')
AND TIMESTAMP>=SYSDATE - 10/(24*60)
AND RETURNCODE !='0'
Unfortunately the performance of this SQL is quite poor, since it does a TABLE ACCESS FULL on sys.aud$. I tried to narrow it down with:
SELECT COUNT (sessionid)
FROM sys.aud$
WHERE userid IN ('USER1','USER2','USER3')
AND ntimestamp# >=SYSDATE - 10/(24*60)
AND RETURNCODE !='0'
and action# between 100 and 102;
And that is even worse. Is it possible at all to optimize this query by forcing Oracle to use indexes here? I would be grateful for any help and tips.
SYS.AUD$ does not have any default indexes but it is possible to create one on ntimestamp#.
But proceed with caution. The support document "The Effect Of Creating Index On Table Sys.Aud$ (Doc ID 1329731.1)" includes this warning:
Creating additional indexes on SYS objects including table AUD$ is not supported.
Normally that would be the end of the conversation and you'd want to try another approach. But in this case there are a few reasons why it's worth a shot:
The document goes on to say that an index may be helpful, and to test it first.
It's just an index. The SYS schema is special, but we're still just talking about an index on a table. It could slow things down, or maybe cause space errors, like any index would. But I doubt there's any chance it could do something crazy like cause wrong results bugs.
It's somewhat common to change the tablespace of the audit trail, so that table isn't sacred.
I've seen indexes on it before. 2 of the 400 databases I manage have an index on the columns SESSIONID,SES$TID (although I don't know why). Those indexes have been there for years, have been through an upgrade and patches, and haven't caused problems as far as I know.
Creating an "unsupported" index may be a good option for you, if you're willing to test it and accept a small amount of risk.
From Oracle 10g onwards the optimizer should choose the best plan for your query, provided you write proper joins. I am not sure how many records exist in your DBA_AUDIT_SESSION, but you can always use PARALLEL hints to speed up the execution somewhat.
SELECT /*+Parallel*/ COUNT (OS_USERNAME)
--select COUNT (OS_USERNAME)
FROM DBA_AUDIT_SESSION
WHERE USERNAME IN ('USER1','USER2','USER3')
AND TIMESTAMP>=SYSDATE - 10/(24*60)
AND RETURNCODE !='0'
The query cost drops to 3, down from the earlier value.
NumRows: 8080019
So the table is pretty large due to company regulations. Unfortunately, using /*+Parallel*/ here makes the query run longer, so the performance is even worse.
Any other suggestions?
I want a tool or solution to find out which tables are affected by running a procedure, function, or package, given the PL/SQL code.
I need this to come up with a better test case, by knowing which tables will be affected by running the code and what operations are performed on them.
The solution should also work for a procedure calling another procedure.
The output might look like:
SELECT FROM: TABLE1
DELETE FROM: TABLE2
INSERT INTO: TABLE3
CALL AnotherPROC:
SELECT FROM: TABLE4
DELETE FROM: TABLE5
Thanks in advance.
For a pre-run analysis, if you are running a stored procedure/package/function, the DBA_DEPENDENCIES view can tell you which objects it depends on, but that doesn't mean they will all necessarily be affected, because program control can take different paths.
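A minimal sketch of that dependency lookup (MY_SCHEMA and MY_PROC are placeholders):
SELECT referenced_owner, referenced_name, referenced_type
FROM   dba_dependencies
WHERE  owner = 'MY_SCHEMA'
AND    name  = 'MY_PROC'
AND    referenced_type IN ('TABLE', 'VIEW');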
For a post-run analysis, you could use auditing or tracing to see which tables were affected.
There are several different ways you can get some or all of this information, but I can't think of any method that will give you the information in the exact format you specified.
Tracing
A trace file can record everything, but it's all stored in a text file meant to be read by a human. There are lots of examples of how to do this; here's one that just worked for me: http://tonguc.wordpress.com/2006/12/30/introduction-to-oracle-trace-utulity-and-understanding-the-fundamental-performance-equation/
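The usual way to switch tracing on for your own session is something like this (a sketch; my_schema.my_proc is a placeholder for the procedure under test, and DBMS_MONITOR needs the appropriate privileges):
ALTER SESSION SET tracefile_identifier = 'my_proc_trace';  -- optional tag for the trace file name
BEGIN
   DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
   my_schema.my_proc;   -- placeholder procedure call
   DBMS_MONITOR.SESSION_TRACE_DISABLE;
END;
/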
Profiling
You can use DBMS_PROFILER to record which line numbers are called by the procedure. Then you'd have to join the line numbers to DBA_SOURCE to get the actual commands.
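Roughly like this (a sketch; it assumes the profiler tables have been installed with proftab.sql, and my_schema.my_proc is a placeholder):
BEGIN
   DBMS_PROFILER.START_PROFILER('affected tables test');
   my_schema.my_proc;   -- placeholder procedure call
   DBMS_PROFILER.STOP_PROFILER;
END;
/
-- Join the recorded line numbers back to the source to see which statements ran.
SELECT s.line, s.text, d.total_occur
FROM   plsql_profiler_runs  r
JOIN   plsql_profiler_units u ON u.runid = r.runid
JOIN   plsql_profiler_data  d ON d.runid = u.runid AND d.unit_number = u.unit_number
JOIN   dba_source           s ON s.owner = u.unit_owner
                             AND s.name  = u.unit_name
                             AND s.type  = u.unit_type
                             AND s.line  = d.line#
WHERE  r.run_comment = 'affected tables test'
AND    d.total_occur > 0;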
V$SQL
This view records the SQL statements that have been executed. You could search for SQL by PARSING_SCHEMA_NAME and order by LAST_ACTIVE_TIME. But this won't get the PL/SQL, and V$SQL can be difficult to use. (SQL may age out, or could get loaded by someone else, etc.)
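Something along these lines (MY_SCHEMA is a placeholder):
SELECT sql_id, sql_text, last_active_time
FROM   v$sql
WHERE  parsing_schema_name = 'MY_SCHEMA'
ORDER  BY last_active_time DESC;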
But to get exactly what you want, all of these solutions require you to write a program to parse SQL and PL/SQL. I'm sure there are tools to do this, but I have no experience with them.
You can always write your own custom logging, but that's a huge amount of work. The best solution may be to ask the developers to adequately document every function, and list the purpose, inputs, outputs, and side-effects of all their code.
In MySQL you can get information about the tables being affected by adding the keyword EXPLAIN at the start of your query. It will give you different pieces of information listed as columns. Checking whether Oracle has a feature like this might help in your scenario.
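Oracle's rough equivalent is EXPLAIN PLAN combined with DBMS_XPLAN (a sketch; note that it shows the access plan for a single statement, not the full set of tables touched by a procedure):
EXPLAIN PLAN FOR
SELECT * FROM dba_objects;   -- placeholder statement
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);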