Do the optimizer_use_sql_plan_baselines and resource_manager_cpu_allocation Oracle system parameters have an impact on SQL query performance?
We have two environments, say A and B. On environment A the query runs fine, but on environment B it takes a long time. I compared the system parameters and found differences in the values of optimizer_use_sql_plan_baselines and resource_manager_cpu_allocation.
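For reference, this is the kind of query I ran on both environments to compare the values:
select name, value
from v$parameter
where name in ('optimizer_use_sql_plan_baselines', 'resource_manager_cpu_allocation');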
SQL plan baselines and the resource manager certainly could have a huge impact on performance, and you should use the below two queries to confirm or deny that those parameters are related to your problem.
GV$SQL stores which SQL plan baseline is associated with each SQL statement. Compare the SQL_PLAN_BASELINE column from the below query on both environments; if the values are equal, then your problem is not related to baselines:
select sql_plan_baseline, round(elapsed_time/1000000) elapsed_seconds, gv$sql.*
from gv$sql
order by elapsed_time desc;
The Active Session History (ASH) views can tell you if the resource manager is an issue. If your queries are being throttled then you will see an event
named "resmgr:cpu quantum" in the below query. (But pay attention to the counts - don't troubleshoot a wait event if it only happens a small number of times.)
select nvl(event, 'CPU') event, count(*)
from gv$active_session_history
group by event
order by count(*) desc;
Resource manager can have other potentially negative effects. If you're in a data warehouse and using parallel queries, it's possible that the resource manager has downgraded the queries on one system. If you're using parallel queries, try comparing the SQL monitoring reports from both systems:
select dbms_sqltune.report_sql_monitor(sql_id => '&YOUR_SQL_ID') from dual;
However, I have a feeling that you're using the wrong approach for your problem. There are generally two approaches to Oracle database performance - database tuning and query tuning. If you're only interested in a single query, then you should probably focus on things like the execution plan and the wait events for the operations of that specific query.
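For a single query, a good starting point is the actual execution plan with its runtime statistics. A minimal sketch (substitute your own SQL_ID; the 'ALLSTATS LAST' format needs statistics_level=ALL or the gather_plan_statistics hint to show actual row counts):
select *
from table(dbms_xplan.display_cursor(sql_id => '&YOUR_SQL_ID', format => 'ALLSTATS LAST'));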
We have a replication from a prod environment with GoldenGate.
The tables were initially dumped from prod, and afterwards we started the replication with GoldenGate. Now we want to migrate the data to another database, but the query plans are different from the prod environment. We think it is because all statistics in the replication database are broken/wrong.
The number of rows stated in dba_tables is null, 0, or differs by 50-80%.
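A check along these lines shows the mismatch (the schema name is a placeholder):
select table_name, num_rows, last_analyzed
from dba_tables
where owner = 'SCHEMA';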
We tried to run dbms_stats.gather_table_stats on all relevant tables.
It's still broken. We ran this for all tables that had wrong statistics:
dbms_stats.GATHER_TABLE_STATS(OWNNAME => 'SCHEMA', TABNAME => 'TABLE_NAME', CASCADE => true);
We can't migrate with the bad query plans.
We are using Oracle Release 12.2.0.1.0 - Production
EDIT: After Jon Heller's answer we saw that some indexes are partitioned in the prod environment but not in the replication. Additionally, the global preference DEGREE is 32768 on the replication and NULL on prod.
Are the tables built exactly the same way? Maybe a different table structure is causing the statistics to break, like if one table is partitioned and another is not. Try comparing the DDL:
select dbms_metadata.get_ddl('TABLE', 'TABLE1') from dual;
I'm surprised to hear that statistics are wrong even after gathering stats. Especially the number of rows - since 10g, that number should always be 100% accurate with the default settings.
Can you list the exact commands you are using to gather stats? Also, this is a stretch, but possibly the global preferences were changed on one database. It would be pretty evil, but you could set a database default to only look at 0.00001% of the data, which would create terrible statistics. Check your global preferences on both databases.
--Thanks to Tim Hall for this query: https://oracle-base.com/dba/script?category=monitoring&file=statistics_prefs.sql
SELECT DBMS_STATS.GET_PREFS('AUTOSTATS_TARGET') AS autostats_target,
       DBMS_STATS.GET_PREFS('CASCADE')          AS cascade,
       DBMS_STATS.GET_PREFS('DEGREE')           AS degree,
       DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT') AS estimate_percent,
       DBMS_STATS.GET_PREFS('METHOD_OPT')       AS method_opt,
       DBMS_STATS.GET_PREFS('NO_INVALIDATE')    AS no_invalidate,
       DBMS_STATS.GET_PREFS('GRANULARITY')      AS granularity,
       DBMS_STATS.GET_PREFS('PUBLISH')          AS publish,
       DBMS_STATS.GET_PREFS('INCREMENTAL')      AS incremental,
       DBMS_STATS.GET_PREFS('STALE_PERCENT')    AS stale_percent
FROM dual;
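If a preference looks off (for example an extreme DEGREE value), it can be pushed back; a hedged sketch, assuming passing NULL restores the Oracle default:
-- Passing NULL should restore the default value;
-- DBMS_STATS.RESET_GLOBAL_PREF_DEFAULTS resets all global preferences at once.
EXEC DBMS_STATS.SET_GLOBAL_PREFS('DEGREE', NULL);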
If gathering statistics still leads to different results, the only thing I can think of is corruption. It may be time to create an Oracle service request.
(This is more of an extended comment than an answer, but it might take a lot of code to diagnose this problem. Please update the original question with more information as you find it.)
We have one system that performs quite a bit of database activity in terms of INSERT/UPDATE/DELETE statements against various tables. Because of this the statistics become stale, and this is reflected in overall performance.
We want to create a scheduled job that would periodically invoke DBMS_STATS.GATHER_SCHEMA_STATS. Because we don't want the stats gathering itself to impact system processing even more, we are thinking of collecting statistics quite frequently and using the GATHER STALE option:
DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME => 'MY_SCHEMA', OPTIONS =>'GATHER STALE')
This executes almost instantly, but running this statement before and after stats gathering seems to bring back the same records with the same values:
SELECT * FROM user_tab_modifications WHERE inserts + updates + deletes > 0;
The very short execution time, and the fact that the user_tab_modifications content stays the same, make me question whether OPTIONS => 'GATHER STALE' actually does what we expect it to do. On the other hand, if I run this before and after statistics gathering, I can see that the tables reported as stale before are no longer reported as stale after:
DECLARE
  stale dbms_stats.objecttab;
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MY_SCHEMA', OPTIONS => 'LIST STALE', objlist => stale);
  FOR i IN 1 .. stale.count
  LOOP
    dbms_output.put_line(stale(i).objName);
  END LOOP;
END;
/
On another hand, if my_table is one of the tables listed in user_tab_modifications with inserts + updates + deletes > 0, and I run the statement below, I can see my_table is no longer reported as having changes.
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MY_SCHEMA', tabname => 'MY_TABLE');
So my questions are:
Is my approach correct? Can I trust that I am getting fresh stats just by running OPTIONS => 'GATHER STALE', or should I manually collect stats for all tables that come back with a reasonable number of inserts, updates, and deletes?
When does user_tab_modifications actually get reset? The GATHER STALE option does not seem to do it.
We are using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
Got the following info from Oracle docs.
You should enable monitoring if you use GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS with the GATHER AUTO or GATHER STALE options.
This view USER_TAB_MODIFICATIONS is populated only for tables with the MONITORING attribute. It is intended for statistics collection over a long period of time. For performance reasons, the Oracle Database does not populate this view immediately when the actual modifications occur. Run the FLUSH_DATABASE_MONITORING_INFO procedure in the DBMS_STATS PL/SQL package to populate this view with the latest information. The ANALYZE_ANY system privilege is required to run this procedure.
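In practice that means flushing the in-memory monitoring data before checking for staleness; a minimal sketch:
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_name, inserts, updates, deletes
FROM user_tab_modifications
WHERE inserts + updates + deletes > 0;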
Hope this helps you to identify which of your assumptions are incorrect and understand the correct usage of "GATHER STALE".
I recently moved a set of my Oracle SQL queries into a package of table functions that return data pipelined.
However, I started observing something unusual. V$SQL stats like buffer_gets, fetches, cpu_time, elapsed_time, etc. have started showing cumulative numbers, and they keep getting higher with every execution of the queries.
Is this usual behavior?
That's expected behavior.
There are a lot of shared resources involved. If you have an application with a dozen database connections in a pool, all doing the same sort of work, then you can have SQL statements with multiple users fetching from them at the same time (e.g. connection 1 getting the invoices for ABC while connection 2 is getting the invoices for XYZ). V$SQL (or GV$SQL in a RAC environment) shows the cumulative totals for most details (and things like USERS_EXECUTING will be current values across all sessions).
The only time they get reset is when the SQL ages out of the shared pool, typically when it hasn't been used for some time and the resources are needed for another SQL, or on rare events like a database shutdown.
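Because the totals are cumulative, divide by EXECUTIONS to get per-execution figures. A minimal sketch (substitute your own SQL_ID):
select sql_id,
       executions,
       round(elapsed_time / nullif(executions, 0)) as avg_elapsed_us,
       round(buffer_gets  / nullif(executions, 0)) as avg_buffer_gets
from gv$sql
where sql_id = '&YOUR_SQL_ID';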
I am developing a DWH on Oracle 11g. We have some big tables (250+ million rows), partitioned by value. Each partition is assigned to a different feeding source, and every partition is independent from the others, so they can be loaded and processed concurrently.
Data distribution is very uneven: we have partitions with millions of rows, and partitions with no more than a hundred rows, but I didn't choose the partitioning scheme, and by the way I can't change it.
Considering the data volume, we must ensure that every partition always has up-to-date statistics, because if the subsequent processing doesn't have optimal access to the data, it will take forever.
So for each concurrent ETL thread, we:
Truncate the partition
Load data from the staging area with
INSERT /*+ APPEND */ INTO big_table PARTITION (part1)
SELECT * FROM temp_table WHERE partition_column = 'PART1';
(this way we get a direct-path insert and we don't lock the whole table)
We gather statistics for the modified partition.
In the first stage of the project, we used the APPROX_GLOBAL_AND_PARTITION strategy and it worked like a charm:
dbms_stats.gather_table_stats(ownname          => myschema,
                              tabname          => big_table,
                              partname         => part1,
                              estimate_percent => 1,
                              granularity      => 'APPROX_GLOBAL_AND_PARTITION',
                              cascade          => dbms_stats.auto_cascade,
                              degree           => dbms_stats.auto_degree);
But we had the drawback that, when we loaded a small partition, the APPROX_GLOBAL part was dominant (still a lot faster than GLOBAL), and for a small partition we had, e.g., 10 seconds of loading and 20 minutes of statistics.
So we were advised to switch to the INCREMENTAL STATS feature of 11g, which means that you don't specify the partition you have modified: you leave all parameters on auto, and Oracle does its magic, automatically understanding which partition(s) have been touched. And it actually works; we have really sped up the small partitions. After turning on the feature, the call became:
dbms_stats.gather_table_stats(ownname          => myschema,
                              tabname          => big_table,
                              estimate_percent => dbms_stats.auto_sample_size,
                              granularity      => 'AUTO',
                              cascade          => dbms_stats.auto_cascade,
                              degree           => dbms_stats.auto_degree);
Notice that you no longer pass the partition, and you don't specify a sample percentage.
But we're having a drawback, maybe even worse than the previous one, and it is correlated with the high level of parallelism we have.
Let's say we have two big partitions that start at the same time; they will finish the load phase at almost the same time too.
The first thread ends its insert statement, commits, and launches the stats gathering. The stats procedure notices there are two modified partitions (this is correct: one is full, and the second is truncated with a transaction in progress) and correctly updates the stats for both partitions.
Eventually the second thread ends, gathers the stats, sees all partitions already updated, and does nothing (this is NOT correct, because the second thread committed its data in the meantime).
The result is:
PARTITION NAME | LAST ANALYZED        | NUM ROWS | BLOCKS | SAMPLE SIZE
---------------|----------------------|----------|--------|------------
PART1          | 04-MAR-2015 15:40:42 | 805731   | 20314  | 805731
PART2          | 04-MAR-2015 15:41:48 | 0        | 16234  | (null)
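For reference, output like the above can be pulled with a query along these lines (against user_tab_statistics):
SELECT partition_name, last_analyzed, num_rows, blocks, sample_size
FROM user_tab_statistics
WHERE table_name = 'BIG_TABLE'
AND object_type = 'PARTITION'
ORDER BY partition_position;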
and the consequence is that I occasionally run into suboptimal plans (which means killing the session, refreshing the stats manually, and launching the process again).
I even tried putting an exclusive lock on the gathering, so that no more than one thread can gather stats on the same table at once, but nothing changed.
IMHO this is odd behaviour, because the second time the stats procedure is invoked it should check for the last commit on the second partition, and should see that it's newer than the last stats gathering time. But that doesn't seem to be happening.
Am I doing something wrong? Is it an Oracle bug? How can I guarantee that all stats are always up to date with the incremental stats feature turned on and a high level of concurrency?
I managed to reach a decent compromise with this procedure.
PROCEDURE gather_tb_partiz(
  p_tblname  IN VARCHAR2,
  p_partname IN VARCHAR2)
IS
  v_stale all_tab_statistics.stale_stats%TYPE;
BEGIN
  BEGIN
    SELECT stale_stats
    INTO   v_stale
    FROM   user_tab_statistics
    WHERE  table_name  = p_tblname
    AND    object_type = 'TABLE';
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      v_stale := 'YES';
  END;
  IF v_stale = 'YES' THEN
    dbms_stats.gather_table_stats(ownname     => myschema,
                                  tabname     => p_tblname,
                                  partname    => p_partname,
                                  degree      => dbms_stats.auto_degree,
                                  granularity => 'APPROX_GLOBAL AND PARTITION');
  ELSE
    dbms_stats.gather_table_stats(ownname     => myschema,
                                  tabname     => p_tblname,
                                  partname    => p_partname,
                                  degree      => dbms_stats.auto_degree,
                                  granularity => 'PARTITION');
  END IF;
END gather_tb_partiz;
At the end of each ETL, if the number of added/deleted/modified rows is low enough not to mark the table as stale (10% by default, tunable with the STALE_PERCENT preference), I collect only partition statistics; otherwise I collect global and partition statistics.
This keeps the ETL of small partitions fast, because no global statistics must be regathered, and big partitions safe, because any subsequent query will have fresh statistics and will likely use an optimal plan.
Incremental stats is enabled anyway, so whenever the global statistics have to be recalculated, it is pretty fast because it aggregates partition-level statistics and does not perform a full scan.
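For reference, a minimal sketch (schema and table names assumed) of checking and tuning that staleness threshold:
-- current threshold for one table (falls back to the global value if not set)
SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', 'MYSCHEMA', 'BIG_TABLE') FROM dual;
-- lower it so the global stats are refreshed more eagerly
EXEC DBMS_STATS.SET_TABLE_PREFS('MYSCHEMA', 'BIG_TABLE', 'STALE_PERCENT', '5');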
I am not sure whether, with incremental enabled, "APPROX_GLOBAL AND PARTITION" and "GLOBAL AND PARTITION" differ in anything, because both incremental and approx do basically the same thing: aggregate stats and histograms without doing a full scan.
Have you tried keeping incremental statistics on, but still explicitly naming a partition to analyze?
dbms_stats.gather_table_stats(ownname  => myschema,
                              tabname  => big_table,
                              partname => part,
                              degree   => dbms_stats.auto_degree);
For your table, stale (yesterday's) global stats are not as harmful as completely invalid partition stats (0 rows). I can propose two slightly different approaches that we use:
Have a separate GLOBAL stats gathering executed by your ETL tool right after all partitions are loaded (see the sketch after this list). If it's taking too long, play with estimate_percent, as dbms_stats.auto_sample_size will likely sample more than 1%.
Gather the global (as well as all other stale) stats in a separate database job run later during the day, after all data is loaded into DW.
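A minimal sketch of that global-only gather (the names and the 1% sample are assumptions to adapt):
dbms_stats.gather_table_stats(ownname          => 'MYSCHEMA',
                              tabname          => 'BIG_TABLE',
                              granularity      => 'GLOBAL',
                              estimate_percent => 1);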
The key point is that stale statistics which differ only slightly from fresh ones are almost as good. If statistics show 0 rows, they'll kill any query.
Considering what you are trying to achieve, you need to run stats at specific intervals of time for all partitions, not at the end of the process that loads each partition. It could be challenging if this is a live table with constant data loads happening around the clock, but since these are large DW tables I really doubt that's the case. So the best bet would be to collect stats at the end of loading all partitions. This ensures that statistics are collected for partitions where data has changed or statistics are missing, and the global statistics are updated based on the partition-level statistics and synopses.
However, to do so, you need to turn on the incremental feature for the table (available from 11gR1).
EXEC DBMS_STATS.SET_TABLE_PREFS('<Owner>','BIG_TABLE','INCREMENTAL','TRUE');
At the end of every load, gather table statistics using GATHER_TABLE_STATS command. You don't need to specify the partition name. Also, do not specify the granularity parameter.
EXEC DBMS_STATS.GATHER_TABLE_STATS('<Owner>','BIG_TABLE');
Kindly check if you have used DBMS_STATS to set the table preference to gather incremental statistics. This Oracle blog explains that statistics may need regathering after even a single affected row:
Incremental statistics maintenance needs to gather statistics on any partition that will change the global or table level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(myschema, 'BIG_TABLE', 'INCREMENTAL', 'TRUE');
END;
/
I'm a bit rusty on this, so first of all a question:
did you try serializing the partition loading? If so, how long and how well does statistics gathering run? Note that since loading time is so much smaller than statistics gathering, I guess this could also act as a temporary workaround.
The APPEND hint affects the redo size, meaning the transaction only logs minimal information, so the statistics gathering may not see the new data:
http://oracle-base.com/articles/misc/append-hint.php
Thinking out loud: since the direct-path insert appends rows at the end of the partition and only updates the metadata at the end, the already-running thread gathering statistics could have read non-updated (stale) data. Thus it may not be a bug, and locking the threads would accomplish nothing.
You could test this behaviour by temporarily switching your table/partition to LOGGING, for instance, and seeing how it works (slower, of course, but it's a test). Can you do it?
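A minimal sketch of that test, assuming the table/partition names from the question (and remembering to switch back afterwards):
ALTER TABLE big_table MODIFY PARTITION part1 LOGGING;
-- rerun the load and the stats gathering here, then compare the results
ALTER TABLE big_table MODIFY PARTITION part1 NOLOGGING;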
EDIT: incremental stats should work anyway, even if you disable parallel statistics gathering, since it relies on the incremental values no matter how they were collected:
https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics