In Oracle (viewing this in Oracle Enterprise Manager) I have a particular SQL Plan Baseline which is set to ENABLED = YES.
However, I can't get ACCEPTED = YES to work. According to the documentation, a plan needs both ENABLED and ACCEPTED to be used.
I tried to evolve the baseline, but OEM complains (via a report) that the plan is 1.06 times worse. That's not true in practice, though.
I would also like to know how to ensure the baseline does not auto-purge over time and that it's fixed. Thanks!
To enable baseline usage, optimizer_use_sql_plan_baselines must be true.
SELECT * FROM dba_sql_plan_baselines
Consider a baseline for the plan with the lowest cost or best elapsed time. The optimizer will choose the lowest-cost accepted plan, but will give preference to a fixed plan. This is the best way to guarantee the CBO uses a given plan, regardless of what hints and SQL profiles exist for the statement.
Once your baseline is loaded, get the SQL handle from the dba_sql_plan_baselines view.
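For example, a sketch of locating the handle (the LIKE pattern is a placeholder):
SELECT sql_handle, plan_name, enabled, accepted, fixed, origin
FROM dba_sql_plan_baselines
WHERE sql_text LIKE '%your statement text%';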
Try to evolve it using either of the following; the second form captures the report into a bind variable:
SET LONG 10000
SELECT DBMS_SPM.evolve_sql_plan_baseline(sql_handle => 'SQL_handle_xxxxxx') FROM dual;
var report clob;
exec :report := dbms_spm.evolve_sql_plan_baseline(sql_handle => 'SQL_handle_xxxxxx');
print :report
Evolving the baseline will only have an effect once additional, not-yet-accepted plans are available for the statement.
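If the verification report keeps rejecting a plan you believe is better (as with your 1.06x report), one option, to be used with caution, is to evolve without verification; a sketch assuming the 11g DBMS_SPM signature:
SELECT DBMS_SPM.evolve_sql_plan_baseline(
         sql_handle => 'SQL_handle_xxxxxx',
         verify     => 'NO',   -- accept without performance verification
         commit     => 'YES')  -- actually set ACCEPTED = YES
FROM dual;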
Sometimes you need to mark the plan as fixed to force the baseline, for example:
SET SERVEROUTPUT ON
DECLARE
  l_plans_altered PLS_INTEGER;
BEGIN
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
    sql_handle      => 'SQL_handle_xxxxxx',
    plan_name       => 'SQL_PLAN_xxxxxxx',
    attribute_name  => 'fixed',
    attribute_value => 'YES');
  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/
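To confirm the attribute change took effect, something like:
SELECT plan_name, enabled, accepted, fixed
FROM dba_sql_plan_baselines
WHERE sql_handle = 'SQL_handle_xxxxxx';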
Usually the plan will not be picked up immediately, and there are various ways to try to force it.
One is to kill all sessions running the statement, if that is viable.
You can also invalidate the cursor by gathering statistics on one of the statement's tables with no_invalidate => FALSE:
begin
  dbms_stats.gather_table_stats(
    ownname       => '<schema>',
    tabname       => '<table>',
    no_invalidate => FALSE);
end;
/
Analyzing using the old method will also invalidate the cursor!
SQL> analyze table <table> estimate statistics sample 1 percent;
To check:
SQL> select child_number, executions, parse_calls, loads, invalidations
from v$sql where sql_id = '<SQLID>';
Sequence of events:
Lock the user
Invalidate the cursor
Kill all sessions logged in as that user
Unlock the user
Check again
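A minimal sketch of the lock/kill steps, assuming the application connects as a dedicated user (APP_USER and the sid/serial# values are placeholders):
ALTER USER app_user ACCOUNT LOCK;
SELECT sid, serial# FROM v$session WHERE username = 'APP_USER';
ALTER SYSTEM KILL SESSION '123,456';
ALTER USER app_user ACCOUNT UNLOCK;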
Let's assume there is a C++ application executing a specific SQL query against an Oracle database.
This query worked fine for the last couple of years in production at a customer's environment, but one fine day it suddenly started taking around 10x more time to execute. (Assume data is constantly being added to the tables this query works on.)
While doing the analysis, the experts found that Oracle's optimizer is not generating an optimal plan, for many possible reasons related to DB statistics, data skew, and all the other parameters that can push the optimizer toward a suboptimal plan.
Forcing the optimizer by placing hints in the query to generate a good execution plan works perfectly.
The application development team is now being pressured to change the application code and inject the hint into the query when it is constructed.
The team doesn't want to change the application code, because they have hundreds of other customers who are not complaining about this specific query's performance.
Changing the application code also means more maintenance cost, as they will need a mechanism to disable the hint if it is no longer needed once the customer upgrades the database to a newer release.
The customer in question is not willing to hire a DBA who could tune the query using the plan baseline feature.
What are the options for application development team in this case?
Create a manual SQL profile to inject the hints without changing the application SQL. While this is still a "DBA" task, it only requires running a single command and is much simpler than SQL Plan Baselines.
Use the PL/SQL block below. Replace the SQL_ID, name, description, and list of hints, and then send the customer the full command.
--Large SQL statements are better loaded from SQL_ID.
declare
  v_sql_id constant varchar2(128) := '2z0udr4rc402m'; --CHANGE ME
  v_sql    clob;
begin
  --Find the SQL; it should be in one of these.
  begin
    select sql_fulltext into v_sql from gv$sql where sql_id = v_sql_id and rownum = 1;
  exception when no_data_found then null;
  end;

  if v_sql is null then
    begin
      select sql_text into v_sql from dba_hist_sqltext where sql_id = v_sql_id and rownum = 1;
    exception when no_data_found then
      raise_application_error(-20000, 'Could not find this SQL_ID in GV$SQL or DBA_HIST_SQLTEXT.');
    end;
  end if;

  --Create the profile.
  dbms_sqltune.import_sql_profile
  (
    sql_text    => v_sql,
    name        => 'Some_Meaningful_Name',          --CHANGE ME
    description => 'Add useful description here.',  --CHANGE ME
    force_match => true,
    profile     => sqlprof_attr('full(a)', 'parallel(8)') --CHANGE ME
  );
end;
/
The "best" solution would be to dig into the customer's system and find out why they are different than everyone else. Do they have some weird optimizer parameter, did they disable the stats job, etc. But if this is the only problem the customer has, then I'd just use the SQL Profile and call it done.
I have a performance problem.
The first PL/SQL block (most of the time it never ends, and the OS database process stays above 90%):
DECLARE
  myId nvarchar2(10) := '0;WF21izb0';
BEGIN
  insert into MY_TABLE (select * from MY_VIEW where ID = myId);
END;
The second PL/SQL block (finishes successfully in 50s):
BEGIN
  insert into MY_TABLE (select * from MY_VIEW where ID = '0;WF21izb0');
END;
select count(*) from MY_VIEW
also never finishes; there are a lot of table joins behind this view.
select count(*) from MY_VIEW where ID = '0;WF21izb0'
finishes in 50s with count = 60000.
Can somebody explain to me why my first PL/SQL block does not finish after 50s? What is the difference between using a literal string and a declared variable?
It boils down to what the DB engine knows about your data and your query, when preparing the query execution plan.
When a literal is placed in your query, it is a part of the query text, so it is known to the engine responsible for preparing the plan. The engine can take that literal value into account and decide on an execution plan that is suitable, e.g. based on the DB data statistics (e.g. that this value is rare).
When you are using a PL/SQL variable, the actual query, for which the plan is determined, is different. It's something like:
insert into MY_TABLE (select * from MY_VIEW where ID = :param)
As you can see, the DB engine now has no information on the value that will be used when the query is executed. So the best plan for such a scenario is one that is reasonably good for most of the probable values, i.e. the values that are prevalent in the data.
If your data is skewed and the '0;WF21izb0' value is rare (or even non-existent), a selective index may be used to narrow down what needs to be processed early in the critical parts of the execution plan. That plan will however backfire when you use a value that is all over the place: the index becomes counter-productive, and a better plan for that case may be a full table scan, possibly the same one used when executing select count(*) from MY_VIEW.
If you are faced with a scenario where you do not know the filtering value up front, you'll have to analyze the view code and try to adjust it so it can be used effectively for less selective values too. You could try applying some optimizer hints to the query. You could also give up on the view and try your luck with a table function, where you can push your filtering predicates into the exact spots of the query where they can be used most effectively.
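As a side note, to see what value the optimizer actually peeked for the shared cursor, a sketch using DBMS_XPLAN (the sql_id is a placeholder you would look up in v$sql):
select * from table(dbms_xplan.display_cursor(
  sql_id          => '&sql_id',
  cursor_child_no => null,
  format          => 'TYPICAL +PEEKED_BINDS'));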
Edit:
All in all, follow the advice from the question comments and examine your execution plans and execution profile data. You should be able to find the culprit. From there it may not be obvious what the solution is, but still, you know your data and relations much better than we do.
I was checking some traces, but after reading the comment from APC and the answer from Hilarion, I ended up with this solution:
declare
  sql_stmt VARCHAR2(200);
  id       VARCHAR2(10) := '0;WF21izb0';
begin
  sql_stmt := 'insert into MY_TABLE (select * from MY_VIEW where ID = :1)';
  EXECUTE IMMEDIATE sql_stmt USING id;
end;
This is done in 50s, and id can now be a function/procedure parameter. (Presumably the dynamically built statement gets its own cursor, so the optimizer peeks at the actual bind value on its first hard parse.)
Thanks for the comments.
One of our systems performs quite a bit of database activity in terms of INSERT/UPDATE/DELETE statements against various tables. Because of this the statistics become stale, and this is reflected in overall performance.
We want to create a scheduled job that periodically invokes DBMS_STATS.GATHER_SCHEMA_STATS. Because we don't want the stats gathering itself to impact system processing even more, we are thinking of collecting statistics quite frequently using the GATHER STALE option:
DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME => 'MY_SCHEMA', OPTIONS =>'GATHER STALE')
This executes almost instantly, but running the statement below before and after stats gathering seems to bring back the same records with the same values:
SELECT * FROM user_tab_modifications WHERE inserts + updates + deletes > 0;
The very short execution time, and the fact that the user_tab_modifications content stays the same, make me question whether OPTIONS => 'GATHER STALE' actually does what we expect. On the other hand, if I run the block below before and after statistics gathering, I can see that tables reported as stale before are no longer reported as stale after:
DECLARE
  stale dbms_stats.objecttab;
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MY_SCHEMA', options => 'LIST STALE', objlist => stale);
  FOR i IN 1 .. stale.count LOOP
    dbms_output.put_line(stale(i).objName);
  END LOOP;
END;
/
Likewise, if, say, MY_TABLE is one of the tables listed in user_tab_modifications with inserts + updates + deletes > 0, and I run the statement below, MY_TABLE is no longer reported as having changes.
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MY_SCHEMA', tabname => 'MY_TABLE');
So my questions are:
Is my approach correct? Can I trust that I am getting fresh stats just by running OPTIONS => 'GATHER STALE', or should I manually collect stats for all tables that come back with a reasonable number of inserts, updates, and deletes?
When would user_tab_modifications actually get reset? Obviously the GATHER STALE option does not seem to do it.
We are using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
I got the following info from the Oracle docs.
You should enable monitoring if you use GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS with the GATHER AUTO or GATHER STALE options.
This view USER_TAB_MODIFICATIONS is populated only for tables with the MONITORING attribute. It is intended for statistics collection over a long period of time. For performance reasons, the Oracle Database does not populate this view immediately when the actual modifications occur. Run the FLUSH_DATABASE_MONITORING_INFO procedure in the DBMS_STATS PL/SQL package to populate this view with the latest information. The ANALYZE_ANY system privilege is required to run this procedure.
Hope this helps you to identify which of your assumptions are incorrect and understand the correct usage of "GATHER STALE".
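Putting that together, a sketch of what the scheduled job could run (the schema name is a placeholder):
-- Flush the monitoring data first, so GATHER STALE sees current modification counts
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME => 'MY_SCHEMA', OPTIONS => 'GATHER STALE');
-- Rows for the re-gathered tables should now disappear from the view
SELECT * FROM user_tab_modifications WHERE inserts + updates + deletes > 0;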
I'm trying to analyse a query execution plan in my Oracle database. I have set
alter system set statistics_level = all;
so that I can compare estimated cardinalities and times with actual cardinalities and times. Now I'm running this statement to display that information:
select * from table(dbms_xplan.display_cursor(
sql_id => '6dt9vvx9gmd1x',
cursor_child_no => 2,
FORMAT => 'ALLSTATS LAST'));
But I keep getting this message
NOTE: cannot fetch plan for SQL_ID: 6dt9vvx9gmd1x, CHILD_NUMBER: 2
Please verify value of SQL_ID and CHILD_NUMBER;
It could also be that the plan is no longer in cursor cache (check
v$sql_plan)
The CHILD_NUMBER was correct when the query was being executed. Also, when I run dbms_xplan.display_cursor at the same time as the query, I get the actual plan. But my JDBC connection closes the PreparedStatement immediately after execution, so maybe that's why the execution plan disappears from v$sql_plan.
Am I getting something wrong, or how can I analyse estimated/actual values after execution?
You could always pin the cursor, which is new in 11g -
dbms_shared_pool.keep ('[address, hash_value from v$open_cursor]', 'C');
Increase the shared_pool to create more caching space for the cursors.
If in 11g, capture the sql plan in the baselines using optimizer_capture_sql_plan_baselines. This stores the plans in dba_sql_plan_baselines.
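A minimal sketch of the capture approach; note that auto-capture only creates a baseline once a statement is recognised as repeating, i.e. executed more than once:
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
-- ...run the statement at least twice...
ALTER SESSION SET optimizer_capture_sql_plan_baselines = FALSE;
SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines;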
I work with SQL Server, but I must migrate to an application with an Oracle DB.
To trace my application's queries, in SQL Server I use the wonderful Profiler tool. Is there something equivalent for Oracle?
I found an easy solution:
Step 1. Connect to the DB with an admin user, using PL/SQL Developer, SQL Developer, or any other query interface.
Step 2. Run the script below; in the S.SQL_FULLTEXT column you will see the executed queries.
SELECT
    S.LAST_ACTIVE_TIME,
    S.MODULE,
    S.SQL_FULLTEXT,
    S.SQL_PROFILE,
    S.EXECUTIONS,
    S.LAST_LOAD_TIME,
    S.PARSING_USER_ID,
    S.SERVICE
FROM
    SYS.V_$SQL S,
    SYS.ALL_USERS U
WHERE
    S.PARSING_USER_ID = U.USER_ID
    AND UPPER(U.USERNAME) IN ('ORACLE USER NAME HERE') --the comparison is uppercased, so supply the name in upper case
ORDER BY
    TO_DATE(S.LAST_LOAD_TIME, 'YYYY-MM-DD/HH24:MI:SS') DESC;
The only issue with this is that I can't find a way to show the input parameter values (for function calls), but at least we can see what was run in Oracle, and in what order, without using a specific tool.
You can use Oracle Enterprise Manager to monitor the active sessions, including the query being executed, its execution plan, locks, some statistics, and even a progress bar for longer tasks.
See: http://download.oracle.com/docs/cd/B10501_01/em.920/a96674/db_admin.htm#1013955
Go to Instance -> sessions and watch the SQL Tab of each session.
There are other ways. Enterprise Manager just puts pretty colors on what is already available in special views, like those documented here:
http://www.oracle.com/pls/db92/db92.catalog_views?remark=homepage
And of course you can also use EXPLAIN PLAN FOR, the trace facility, and tons of other instrumentation. There are also reports in Enterprise Manager for the most expensive SQL queries, and you can search recent queries kept in the cache.
alter system set timed_statistics=true
--or
alter session set timed_statistics=true --if want to trace your own session
-- must be big enough:
select value from v$parameter p
where name='max_dump_file_size'
-- Find out the sid and serial# of the session you are interested in:
select sid, serial# from v$session
where ...your_search_params...
--you can begin tracing with the 10046 event; the fourth parameter sets the trace level (12 is the highest):
begin
sys.dbms_system.set_ev(sid, serial#, 10046, 12, '');
end;
--turn off tracing with setting zero level:
begin
sys.dbms_system.set_ev(sid, serial#, 10046, 0, '');
end;
/*possible levels:
0 - turned off
1 - minimal level. Much like set sql_trace=true
4 - bind variables values are added to trace file
8 - waits are added
12 - both bind variable values and wait events are added
*/
--the same, if you want to trace your own session at a higher level:
alter session set events '10046 trace name context forever, level 12';
--turn off:
alter session set events '10046 trace name context off';
--the file with the raw trace information will be located in:
select value from v$parameter p
where name='user_dump_dest'
--the name of the file (*.trc) will contain the spid:
select p.spid from v$session s, v$process p
where s.paddr=p.addr
and ...your_search_params...
--you can also set the trace file name yourself:
alter session set tracefile_identifier='UniqueString';
--finally, use TKPROF to make trace file more readable:
C:\ORACLE\admin\databaseSID\udump>
C:\ORACLE\admin\databaseSID\udump>tkprof my_trace_file.trc output=my_file.prf
TKPROF: Release 9.2.0.1.0 - Production on Wed Sep 22 18:05:00 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
C:\ORACLE\admin\databaseSID\udump>
--to check whether tracing is enabled for your session, use:
set serveroutput on size 30000;
declare
  ALevel binary_integer;
begin
  SYS.DBMS_SYSTEM.Read_Ev(10046, ALevel);
  if ALevel = 0 then
    DBMS_OUTPUT.Put_Line('sql_trace is off');
  else
    DBMS_OUTPUT.Put_Line('sql_trace is on');
  end if;
end;
/
This is more or less a translation of http://www.sql.ru/faq/faq_topic.aspx?fid=389. The original is fuller, but this is still better than what the others posted, IMHO.
GI Oracle Profiler v1.2
It's a tool for Oracle that captures executed queries, similar to the SQL Server Profiler.
An indispensable tool for the maintenance of applications that use this database server.
You can download it from the official site, iacosoft.com.
Try PL/SQL Developer; it has a nice, user-friendly GUI interface to the profiler. It's pretty nice; give the trial a try. I swear by this tool when working on Oracle databases.
http://www.allroundautomations.com/plsqldev.html
Seeing as I've just voted a recent question as a duplicate and pointed it in this direction...
A couple more: in SQL*Plus, SET AUTOTRACE ON will give the explain plan and statistics for each statement executed.
TOAD also allows for client side profiling.
The disadvantage of both of these is that they only tell you the execution plan for the statement, not how the optimiser arrived at that plan; for that you will need lower-level server-side tracing.
Another important one to understand is Statspack snapshots; they are a good way of looking at the performance of the database as a whole. Explain plan and friends are good at finding individual SQL statements that are bottlenecks; Statspack is good at identifying that your problem is a simple statement with a good execution plan being called a million times a minute.
The trick is to capture all SQL run between two points in time, the way SQL Server does.
There are situations where it is useful to capture the SQL that a particular user is running in the database. Usually you would simply enable session tracing for that user, but there are two potential problems with that approach.
The first is that many web based applications maintain a pool of persistent database connections which are shared amongst multiple users.
The second is that some applications connect, run some SQL and disconnect very quickly, making it tricky to enable session tracing at all (you could of course use a logon trigger to enable session tracing in this case).
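For example, a hypothetical logon trigger for that case (APP_USER is a placeholder):
CREATE OR REPLACE TRIGGER trace_app_user
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'APP_USER' THEN
    EXECUTE IMMEDIATE
      q'[ALTER SESSION SET EVENTS '10046 trace name context forever, level 12']';
  END IF;
END;
/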
A quick and dirty solution to the problem is to capture all SQL statements that are run between two points in time.
The following procedure will create two tables, each containing a snapshot of the database at a particular point. The tables will then be queried to produce a list of all SQL run during that period.
If possible, you should do this on a quiet development system - otherwise you risk getting way too much data back.
Take the first snapshot
Run the following sql to create the first snapshot:
create table sql_exec_before as
select executions,hash_value
from v$sqlarea
/
Get the user to perform their task within the application.
Take the second snapshot.
create table sql_exec_after as
select executions, hash_value
from v$sqlarea
/
Check the results
Now that you have captured the SQL it is time to query the results.
This first query will list all query hashes that have been executed:
select aft.hash_value
from sql_exec_after aft
left outer join sql_exec_before bef
  on aft.hash_value = bef.hash_value
where aft.executions > bef.executions
   or bef.executions is null;
This one will display the hash and the SQL itself:
set pages 999 lines 100
break on hash_value
select hash_value, sql_text
from v$sqltext
where hash_value in (
  select aft.hash_value
  from sql_exec_after aft
  left outer join sql_exec_before bef
    on aft.hash_value = bef.hash_value
  where aft.executions > bef.executions
     or bef.executions is null
)
order by
hash_value, piece
/
Tidy up. Don't forget to remove the snapshot tables once you've finished:
drop table sql_exec_before
/
drop table sql_exec_after
/
Oracle, along with other databases, analyzes a given query to create an execution plan. This plan is the most efficient way of retrieving the data.
Oracle provides the 'explain plan' statement which analyzes the query but doesn't run it, instead populating a special table that you can query (the plan table).
The syntax (simple version; there are other options, such as marking the rows in the plan table with a special ID, or using a different plan table) is:
explain plan for <sql query>
The analysis of that data is left for another question, or your further research.
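A minimal example (my_table is a placeholder):
explain plan for
select * from my_table where id = 42;

-- read the plan back from the plan table
select * from table(dbms_xplan.display);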
There is a commercial tool, FlexTracer, which can be used to trace Oracle SQL queries.
This is an Oracle doc explaining how to trace SQL queries, including a couple of tools (SQL Trace and tkprof): link
Apparently there is no small, simple, cheap utility that would help with this task. There are, however, 101 ways to do it in a complicated and inconvenient manner.
The following article describes several; there are probably dozens more:
http://www.petefinnigan.com/ramblings/how_to_set_trace.htm