How Oracle GATHER_SCHEMA_STATS works

We have a system that performs quite a bit of database activity in terms of INSERT/UPDATE/DELETE statements against various tables. Because of this the statistics become stale, and this is reflected in overall performance.
We want to create a scheduled job that periodically invokes DBMS_STATS.GATHER_SCHEMA_STATS. Because we don't want the stats gathering itself to impact system processing even more, we are thinking of collecting statistics quite frequently and using the GATHER STALE option:
DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME => 'MY_SCHEMA', OPTIONS =>'GATHER STALE')
This executes almost instantly, but running the statement below before and after stats gathering seems to bring back the same records with the same values:
SELECT * FROM user_tab_modifications WHERE inserts + updates + deletes > 0;
The very short execution time, and the fact that the user_tab_modifications content stays the same, make me question whether OPTIONS => 'GATHER STALE' actually does what we expect it to do. On the other hand, if I run this before and after statistics gathering, I can see that the tables reported as stale before are no longer reported as stale after:
DECLARE
  stale dbms_stats.objecttab;
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MY_SCHEMA', options => 'LIST STALE', objlist => stale);
  FOR i IN 1 .. stale.count
  LOOP
    dbms_output.put_line( stale(i).objname );
  END LOOP;
END;
/
On the other hand, if, say, my_table is one of the tables listed in user_tab_modifications with inserts + updates + deletes > 0 and I run the statement below, I can see my_table is no longer reported as having changes.
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MY_SCHEMA', tabname => 'MY_TABLE');
So my questions are:
Is my approach correct? Can I trust that I am getting fresh stats just by running OPTIONS => 'GATHER STALE', or should I manually collect stats for all tables that come back with a reasonable number of inserts, updates, and deletes?
When does user_tab_modifications actually get reset? Obviously the GATHER STALE option does not seem to do it.
We are using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

Got the following info from Oracle docs.
You should enable monitoring if you use GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS with the GATHER AUTO or GATHER STALE options.
This view USER_TAB_MODIFICATIONS is populated only for tables with the MONITORING attribute. It is intended for statistics collection over a long period of time. For performance reasons, the Oracle Database does not populate this view immediately when the actual modifications occur. Run the FLUSH_DATABASE_MONITORING_INFO procedure in the DBMS_STATS PL/SQL package to populate this view with the latest information. The ANALYZE_ANY system privilege is required to run this procedure.
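Based on that, a minimal sketch of the scheduled job body (using the schema name from the question) would flush the monitoring info first, so GATHER STALE sees an up-to-date staleness picture:
BEGIN
  -- flush in-memory DML monitoring data so *_TAB_MODIFICATIONS is current
  DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
  -- gather only the objects now reported as stale
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MY_SCHEMA', options => 'GATHER STALE');
END;
/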
Hope this helps you to identify which of your assumptions are incorrect and understand the correct usage of "GATHER STALE".

Related

Auto Optimizer Stats Collection Job Causing Oracle RDS Database to Restart

We have an Oracle 19c database (19.0.0.0.ru-2021-04.rur-2021-04.r1) on AWS RDS, hosted on a 4 CPU, 32 GB RAM instance. The size of the database is not big (35 GB), and the PGA Aggregate Limit is 8 GB and the Target is 4 GB. Whenever the scheduled internal Oracle Auto Optimizer Stats Collection Job (ORA$AT_OS_OPT_SY_nnn) runs, it consumes substantially high PGA memory (approx. 7 GB), and sometimes this makes the database unstable; AWS loses communication with the RDS instance, so it restarts the database.
We thought this might be linked to existing Oracle bug 30846782 (19C+: Fast/Excessive PGA growth when using DBMS_STATS.GATHER_TABLE_STATS), but Oracle and AWS have fixed it in the 19c version we are using. There are no application-level operations that consume this much PGA, and the database restarts have always happened while the Auto Optimizer Stats Collection Job was running. There are a couple more databases on the same version where the same pattern was observed and the database was restarted by AWS. We have disabled the job on those databases to avoid further occurrences of this issue; however, we want to run this job, as disabling it may leave stale stats in the database.
Any pointers on how to tackle this issue?
I found the same issue in my AWS RDS Oracle 18c and 19c instances, even though I am not on the same patch level as you.
In my case, I applied this workaround and it worked.
SQL> alter system set "_fix_control"='20424684:OFF' scope=both;
However, before applying this change, I strongly suggest that you test it in your non-production environments and, if you can, try to consult Oracle Support. Dealing with hidden parameters might lead to unexpected side effects, so apply it at your own risk.
Instead of completely abandoning automatic statistics gathering, try to find the specific objects that are causing the problem. If only a small number of tables are responsible for a large amount of statistics gathering, you can manually analyze those tables or change their preferences.
First, use the SQL below to see which objects are causing the most statistics gathering. According to the test case in bug 30846782, the problem seems to be related only to the number of times DBMS_STATS is called.
select *
from dba_optstat_operations
order by start_time desc;
In addition, you may be able to find specific SQL statements or sessions that generate a lot of PGA memory with the query below. (However, if the database restarts, it's possible that AWR won't save the recorded values.)
select username, event, sql_id, pga_allocated/1024/1024/1024 pga_allocated_gb, gv$active_session_history.*
from gv$active_session_history
join dba_users on gv$active_session_history.user_id = dba_users.user_id
where pga_allocated/1024/1024/1024 >= 1
order by sample_time desc;
If the problem is only related to a small number of tables with a large number of partitions, you can manually gather the stats on just that table in a separate session. Once the stats are gathered, the table won't be analyzed again until about 10% of the data is changed.
begin
dbms_stats.gather_table_stats(user, 'PGA_STATS_TEST');
end;
/
It's not uncommon for a database to spend a long time gathering statistics, but it is uncommon for a database to constantly analyze thousands of objects. Running into this bug implies there is something unusual about your database - are you constantly dropping and creating objects, or do you have a large number of objects that have 10% of their data modified every day? You may need to add a manual gather step to a few of your processes.
Turning off the automatic statistics job entirely will eventually cause many performance problems. Even if you can't add manual gathering steps, you may still want to keep the job enabled. For example, if tables are being analyzed too frequently, you may want to increase the table preference for the "STALE_PERCENT" threshold from 10% to 20%:
begin
  dbms_stats.set_table_prefs
  (
    ownname => user,
    tabname => 'PGA_STATS_TEST',
    pname   => 'STALE_PERCENT',
    pvalue  => '20'
  );
end;
/

How to turn off stats on Global Temporary Table and impact

Hello I am not very experienced on Oracle DB administration, I have several queries that started to work really slow and all involve with temp table (12c). I see several posts talking about disable stats on GTT (Global Temporary Table), however I didn't find much but just following setting to disable
exec dbms_scheduler.disable('SYS.GATHER_STATS_JOB');
my question is any specific way to turn off stats ONLY to GTT, and what is the negative impact on above command?
This disables statistics gathering on a table
begin
dbms_stats.delete_table_stats('TABLE_OWNER', 'TABLE_NAME');
dbms_stats.lock_table_stats('TABLE_OWNER', 'TABLE_NAME');
end;
/
However a better idea is to collect statistics on the table when it is filled with representative data set:
begin
dbms_stats.gather_table_stats('TABLE_OWNER', 'TABLE_NAME');
dbms_stats.lock_table_stats('TABLE_OWNER', 'TABLE_NAME');
end;
/
https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_stats.htm#i1043993

concurrent statistics gathering on Oracle 11g partitioned table

I am developing a DWH on Oracle 11g. We have some big tables (250+ million rows), partitioned by value. Each partition is assigned to a different feeding source, and every partition is independent from the others, so they can be loaded and processed concurrently.
Data distribution is very uneven: we have partitions with millions of rows and partitions with no more than a hundred rows, but I didn't choose the partitioning scheme and, by the way, I can't change it.
Given the data volume, we must ensure that every partition always has up-to-date statistics, because if the subsequent processing doesn't have optimal access to the data, it will take forever.
So for each concurrent ETL thread, we
Truncate the partition
Load data from staging area with
INSERT /*+ APPEND */ INTO big_table PARTITION (part1) SELECT * FROM temp_table WHERE partition_column = 'PART1'
(this way we have direct path and we don't lock the whole table)
We gather statistics for the modified partition.
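Putting those steps together, a minimal sketch of one ETL thread's load phase (using the names from this question) is:
ALTER TABLE big_table TRUNCATE PARTITION part1;

INSERT /*+ APPEND */ INTO big_table PARTITION (part1)
SELECT * FROM temp_table WHERE partition_column = 'PART1';

COMMIT;
-- partition statistics are gathered next, as described below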
In the first stage of the project, we used the APPROX_GLOBAL_AND_PARTITION strategy and it worked like a charm:
dbms_stats.gather_table_stats(ownname=>myschema,
tabname=>big_table,
partname=>part1,
estimate_percent=>1,
granularity=>'APPROX_GLOBAL_AND_PARTITION',
CASCADE=>dbms_stats.auto_cascade,
degree=>dbms_stats.auto_degree)
But we had the drawback that, when we loaded a small partition, the APPROX_GLOBAL part was dominant (still a lot faster than GLOBAL), and for a small partition we had, e.g., 10 seconds of loading and 20 minutes of statistics.
So we were advised to switch to the INCREMENTAL STATS feature of 11g, which means that you don't specify the partition you have modified, you leave all parameters on auto, and Oracle does its magic, automatically understanding which partition(s) have been touched. And it actually works; we have really sped up the small partitions. After turning on the feature, the call became:
dbms_stats.gather_table_stats(ownname=>myschema,
tabname=>big_table,
estimate_percent=>dbms_stats.auto_sample_size,
granularity=>'AUTO',
CASCADE=>dbms_stats.auto_cascade,
degree=>dbms_stats.auto_degree)
Notice that you don't pass the partition anymore, and you don't specify a sample percentage.
But we're having a drawback, maybe even worse than the previous one, and it is correlated with the high level of parallelism we have.
Let's say we have two big partitions that start at the same time; they will finish the load phase almost at the same time too.
The first thread ends the insert statement, commits, and launches the stats gathering. The stats procedure notices there are two modified partitions (this is correct: one is full and the second is truncated, with a transaction in progress) and correctly updates the stats for both partitions.
Eventually the second thread ends, gathers the stats, sees all partitions already updated, and does nothing (this is NOT correct, because the second thread committed its data in the meantime).
The result is:
PARTITION NAME | LAST ANALYZED | NUM ROWS | BLOCKS | SAMPLE SIZE
-----------------------------------------------------------------------
PART1 | 04-MAR-2015 15:40:42 | 805731 | 20314 | 805731
PART2 | 04-MAR-2015 15:41:48 | 0 | 16234 | (null)
and the consequence is that I occasionally run into non-optimal plans (which means killing the session, manually refreshing the stats, and manually launching the process again).
I even tried putting an exclusive lock around the gathering, so that no more than one thread can gather stats on the same table at once, but nothing changed.
IMHO this is odd behaviour, because the stats procedure, the second time it is invoked, should check the last commit on the second partition and should see that it's newer than the last stats gathering time. But that doesn't seem to be happening.
Am I doing something wrong? Is it an Oracle bug? How can I guarantee that all stats are always up to date with the incremental stats feature turned on and a high level of concurrency?
I managed to reach a decent compromise with this procedure:
PROCEDURE gather_tb_partiz(
  p_tblname  IN VARCHAR2,
  p_partname IN VARCHAR2)
IS
  v_stale all_tab_statistics.stale_stats%TYPE;
BEGIN
  BEGIN
    SELECT stale_stats
      INTO v_stale
      FROM user_tab_statistics
     WHERE table_name = p_tblname
       AND object_type = 'TABLE';
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      v_stale := 'YES';
  END;
  IF v_stale = 'YES' THEN
    dbms_stats.gather_table_stats(ownname     => myschema,
                                  tabname     => p_tblname,
                                  partname    => p_partname,
                                  degree      => dbms_stats.auto_degree,
                                  granularity => 'APPROX_GLOBAL AND PARTITION');
  ELSE
    dbms_stats.gather_table_stats(ownname     => myschema,
                                  tabname     => p_tblname,
                                  partname    => p_partname,
                                  degree      => dbms_stats.auto_degree,
                                  granularity => 'PARTITION');
  END IF;
END gather_tb_partiz;
At the end of each ETL run, if the number of added/deleted/modified rows is low enough not to mark the table as stale (10% by default, tunable with the STALE_PERCENT preference), I collect only partition statistics; otherwise I collect global and partition statistics.
This keeps the ETL of small partitions fast, because no global statistics must be regathered, and big partitions safe, because any subsequent query will have fresh statistics and will likely use an optimal plan.
Incremental stats is enabled anyway, so whenever the global statistics have to be recalculated it is pretty fast, because it aggregates partition-level statistics and does not perform a full scan.
I am not sure whether, with incremental enabled, "APPROX_GLOBAL AND PARTITION" and "GLOBAL AND PARTITION" differ in anything, because both incremental and approx basically do the same thing: aggregate stats and histograms without doing a full scan.
Have you tried to have incremental statistics on, but still explicitly name a partition to analyze?
dbms_stats.gather_table_stats(ownname=>myschema,
tabname=>big_table,
partname=>part,
degree=>dbms_stats.auto_degree);
For your table, stale (yesterday's) global stats are not as harmful as completely invalid partition stats (0 rows). I can propose two slightly different approaches that we use:
Have a separate GLOBAL stats gathering executed by your ETL tool right after all partitions are loaded. If it's taking too long, play with estimate_percent, as dbms_stats.auto_sample_size will likely be more than 1%.
Gather the global (as well as all other stale) stats in a separate database job run later during the day, after all data is loaded into DW.
The key point is that stale statistics which differ only slightly from fresh are almost just as good. If statistics show you 0 rows, they'll kill any query.
Considering what you are trying to achieve, you need to run stats at specific intervals of time for all partitions, and not at the end of the process that loads each partition. It could be challenging if this is a live table with constant data loads happening round the clock, but since these are LARGE DW tables I really doubt that's the case. So the best bet would be to collect stats at the end of loading all partitions; this ensures that statistics are collected for partitions where data has changed or statistics are missing, and that the global statistics are updated based on the partition-level statistics and synopses.
However, to do so you need to turn on the incremental feature for the table (11gR1).
EXEC DBMS_STATS.SET_TABLE_PREFS('<Owner>','BIG_TABLE','INCREMENTAL','TRUE');
At the end of every load, gather table statistics using GATHER_TABLE_STATS command. You don't need to specify the partition name. Also, do not specify the granularity parameter.
EXEC DBMS_STATS.GATHER_TABLE_STATS('<Owner>','BIG_TABLE');
Kindly check whether you have used DBMS_STATS to set the table preference to gather incremental statistics. This Oracle blog explains that statistics may need to be gathered after even a single affected row:
Incremental statistics maintenance needs to gather statistics on any partition that will change the global or table level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table
BEGIN
DBMS_STATS.SET_TABLE_PREFS(myschema,'BIG_TABLE','INCREMENTAL','TRUE');
END;
I'm a bit rusty on this, so first of all a question:
did you try serializing partition loading? If so, how long and how well does statistics gathering run? Note that since loading time is so much smaller than statistics gathering, I guess this could also act as a temporary workaround.
The APPEND hint does affect redo size, meaning the transaction just logs minimal information, so statistics may not account for the new data:
http://oracle-base.com/articles/misc/append-hint.php
Thinking out loud: since the direct-path insert appends rows at the end of the partition and only updates metadata at the end, the already-running thread gathering statistics could have read non-updated (stale) data. So it may not be a bug, and locking the threads would accomplish nothing.
You could test this behaviour by temporarily switching your table/partition to LOGGING, for instance, and seeing how it works (slower, of course, but it's a test). Can you do that?
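For reference, a hypothetical sketch of that test, reusing the names from the question (switch the partition to LOGGING and load it through the conventional path, then watch how the concurrent stats gathering behaves):
ALTER TABLE big_table MODIFY PARTITION part1 LOGGING;

INSERT INTO big_table PARTITION (part1) -- conventional path: no APPEND hint
SELECT * FROM temp_table WHERE partition_column = 'PART1';

COMMIT;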
EDIT: incremental stats should work anyway, even if you disable parallel statistics gathering, since it relies on the incremental values no matter how they were collected:
https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics

Sql vs Oracle - profiler [duplicate]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 7 years ago.
I work with SQL Server, but I must migrate to an application with an Oracle DB.
To trace my application's queries, in SQL Server I use the wonderful Profiler tool. Is there an equivalent for Oracle?
I found an easy solution:
Step 1. Connect to the DB with an admin user using PL/SQL Developer, SQL Developer, or any other query interface.
Step 2. Run the script below; in the S.SQL_FULLTEXT column you will see the executed queries.
SELECT
S.LAST_ACTIVE_TIME,
S.MODULE,
S.SQL_FULLTEXT,
S.SQL_PROFILE,
S.EXECUTIONS,
S.LAST_LOAD_TIME,
S.PARSING_USER_ID,
S.SERVICE
FROM
SYS.V_$SQL S,
SYS.ALL_USERS U
WHERE
S.PARSING_USER_ID=U.USER_ID
AND UPPER(U.USERNAME) IN ('oracle user name here')
ORDER BY TO_DATE(S.LAST_LOAD_TIME, 'YYYY-MM-DD/HH24:MI:SS') desc;
The only issue with this is that I can't find a way to show the input parameter values (for function calls), but at least we can see what was run in Oracle, and the order of it, without using a specific tool.
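If the input values are bind variables, one partial workaround (my assumption, and limited by the fact that Oracle only captures binds periodically) is to join V$SQL against V$SQL_BIND_CAPTURE:
SELECT S.SQL_ID,
       S.SQL_TEXT,
       B.NAME          AS BIND_NAME,
       B.VALUE_STRING,
       B.LAST_CAPTURED
FROM   V$SQL S
JOIN   V$SQL_BIND_CAPTURE B
       ON B.SQL_ID = S.SQL_ID
      AND B.CHILD_NUMBER = S.CHILD_NUMBER
WHERE  S.PARSING_SCHEMA_NAME = 'ORACLE USER NAME HERE'
ORDER BY B.LAST_CAPTURED DESC;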
You can use Oracle Enterprise Manager to monitor the active sessions, with the query that is being executed, its execution plan, locks, some statistics, and even a progress bar for the longer tasks.
See: http://download.oracle.com/docs/cd/B10501_01/em.920/a96674/db_admin.htm#1013955
Go to Instance -> Sessions and watch the SQL tab of each session.
There are other ways. Enterprise Manager just presents, with pretty colors, what is already available in special views like those documented here:
http://www.oracle.com/pls/db92/db92.catalog_views?remark=homepage
And, of course, you can also use EXPLAIN PLAN FOR, the TRACE tool, and tons of other instrumentation methods. There are reports in Enterprise Manager for the most expensive SQL queries. You can also search recent queries kept in the cache.
alter system set timed_statistics=true
--or
alter session set timed_statistics=true --if want to trace your own session
-- must be big enough:
select value from v$parameter p
where name='max_dump_file_size'
-- Find out sid and serial# of session you interested in:
select sid, serial# from v$session
where ...your_search_params...
--you can begin tracing with 10046 event, the fourth parameter sets the trace level(12 is the biggest):
begin
sys.dbms_system.set_ev(sid, serial#, 10046, 12, '');
end;
--turn off tracing with setting zero level:
begin
sys.dbms_system.set_ev(sid, serial#, 10046, 0, '');
end;
/*possible levels:
0 - turned off
1 - minimal level. Much like set sql_trace=true
4 - bind variables values are added to trace file
8 - waits are added
12 - both bind variable values and wait events are added
*/
--same if you want to trace your own session with bigger level:
alter session set events '10046 trace name context forever, level 12';
--turn off:
alter session set events '10046 trace name context off';
--file with raw trace information will be located:
select value from v$parameter p
where name='user_dump_dest'
--name of the file(*.trc) will contain spid:
select p.spid from v$session s, v$process p
where s.paddr=p.addr
and ...your_search_params...
--also you can set the name by yourself:
alter session set tracefile_identifier='UniqueString';
--finally, use TKPROF to make trace file more readable:
C:\ORACLE\admin\databaseSID\udump>
C:\ORACLE\admin\databaseSID\udump>tkprof my_trace_file.trc output=my_file.prf
TKPROF: Release 9.2.0.1.0 - Production on Wed Sep 22 18:05:00 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
C:\ORACLE\admin\databaseSID\udump>
--to view state of trace file use:
set serveroutput on size 30000;
declare
ALevel binary_integer;
begin
SYS.DBMS_SYSTEM.Read_Ev(10046, ALevel);
if ALevel = 0 then
DBMS_OUTPUT.Put_Line('sql_trace is off');
else
DBMS_OUTPUT.Put_Line('sql_trace is on');
end if;
end;
/
This is more or less a translation of http://www.sql.ru/faq/faq_topic.aspx?fid=389 - the original is fuller, but anyway this is better than what others posted, IMHO.
GI Oracle Profiler v1.2
It's a tool for Oracle to capture executed queries, similar to the SQL Server Profiler.
An indispensable tool for the maintenance of applications that use this database server.
You can download it from the official site, iacosoft.com.
Try PL/SQL Developer; it has a nice, user-friendly GUI interface to the profiler. It's pretty nice; give the trial a try. I swear by this tool when working on Oracle databases.
http://www.allroundautomations.com/plsqldev.html?gclid=CM6pz8e04p0CFQjyDAodNXqPDw
Seeing as I've just voted a recent question as a duplicate and pointed in this direction . . .
A couple more: in SQL*Plus, SET AUTOTRACE ON will give the explain plan and statistics for each statement executed.
TOAD also allows for client side profiling.
The disadvantage of both of these is that they only tell you the execution plan for the statement, but not how the optimiser arrived at that plan - for that you will need lower level server side tracing.
Another important one to understand is Statspack snapshots - they are a good way for looking at the performance of the database as a whole. Explain plan, etc, are good at finding individual SQL statements that are bottlenecks. Statspack is good at identifying the fact your problem is that a simple statement with a good execution plan is being called 1 million times in a minute.
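As a rough usage sketch of Statspack (this assumes Statspack is installed and you are connected as the PERFSTAT owner):
EXEC statspack.snap;          -- snapshot before the workload
-- ... run the workload you want to measure ...
EXEC statspack.snap;          -- snapshot after the workload
@?/rdbms/admin/spreport.sql   -- report on the interval between the two snapshots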
The catch is to capture all SQL run between two points in time, the way SQL Server does.
There are situations where it is useful to capture the SQL that a particular user is running in the database. Usually you would simply enable session tracing for that user, but there are two potential problems with that approach.
The first is that many web based applications maintain a pool of persistent database connections which are shared amongst multiple users.
The second is that some applications connect, run some SQL and disconnect very quickly, making it tricky to enable session tracing at all (you could of course use a logon trigger to enable session tracing in this case).
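For that quick connect/disconnect case, a sketch of such a logon trigger might look like this (the user name and trace level are placeholders to adapt to your environment):
CREATE OR REPLACE TRIGGER trace_app_user_logon
AFTER LOGON ON DATABASE
BEGIN
  -- only trace the application user we are interested in
  IF USER = 'APP_USER' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 12''';
  END IF;
END;
/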
A quick and dirty solution to the problem is to capture all SQL statements that are run between two points in time.
The following procedure will create two tables, each containing a snapshot of the database at a particular point. The tables will then be queried to produce a list of all SQL run during that period.
If possible, you should do this on a quiet development system - otherwise you risk getting way too much data back.
Take the first snapshot
Run the following sql to create the first snapshot:
create table sql_exec_before as
select executions,hash_value
from v$sqlarea
/
Get the user to perform their task within the application.
Take the second snapshot.
create table sql_exec_after as
select executions, hash_value
from v$sqlarea
/
Check the results
Now that you have captured the SQL it is time to query the results.
This first query will list all query hashes that have been executed:
select aft.hash_value
from sql_exec_after aft
left outer join sql_exec_before bef
on aft.hash_value = bef.hash_value
where aft.executions > bef.executions
or bef.executions is null;
/
This one will display the hash and the SQL itself:
set pages 999 lines 100
break on hash_value
select hash_value, sql_text
from v$sqltext
where hash_value in (
select aft.hash_value
from sql_exec_after aft
left outer join sql_exec_before bef
on aft.hash_value = bef.hash_value
where aft.executions > bef.executions
or bef.executions is null
)
order by
hash_value, piece
/
Tidy up. Don't forget to remove the snapshot tables once you've finished:
drop table sql_exec_before
/
drop table sql_exec_after
/
Oracle, along with other databases, analyzes a given query to create an execution plan. This plan is the most efficient way of retrieving the data.
Oracle provides the 'explain plan' statement which analyzes the query but doesn't run it, instead populating a special table that you can query (the plan table).
The syntax (simple version, there are other options such as to mark the rows in the plan table with a special ID, or use a different plan table) is:
explain plan for <sql query>
The analysis of that data is left for another question, or your further research.
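As a minimal usage sketch (the table and predicate are made up; DBMS_XPLAN is the usual way to read the plan table):
explain plan for
select * from employees where department_id = 10;

select * from table(dbms_xplan.display);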
There is a commercial tool FlexTracer which can be used to trace Oracle SQL queries
This is an Oracle doc explaining how to trace SQL queries, including a couple of tools (SQL Trace and tkprof)
link
Apparently there is no small, simple, cheap utility that would help with this task. There are, however, 101 ways to do it in a complicated and inconvenient manner.
The following article describes several. There are probably dozens more...
http://www.petefinnigan.com/ramblings/how_to_set_trace.htm

Oracle tuning / analyze tables

What are the means to schedule an automatic "analyze tables"? Is it possible to request an automatic "analyze tables" when a lot of data changes through inserts and deletes? What are the means to parametrize the automatic analyze-tables process, i.e. to set rules for when it should be triggered?
What version of Oracle are you using? From 10.1 on, Oracle has shipped with an automatic job that gathers statistics every night on any object that has been changed substantially (based on an internal algorithm subject to change, but I believe the threshold is ~15%) since the last time statistics were gathered. Assuming you are using a currently supported release, statistics are gathered automatically by default unless you explicitly disable that job.
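If you are on 11g or later, a quick way to confirm that the automatic task is still enabled (view and task names as of 11g, so treat this as a version-dependent sketch) is:
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto optimizer stats collection';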
You can use the DBMS_STATS package to gather statistics on an object based on an event, and you can use the DBMS_JOB (or DBMS_SCHEDULER) package to have that done asynchronously. So at the end of an ETL process, you can gather statistics:
BEGIN
dbms_stats.gather_table_stats(
ownname => <<schema name>>,
tabname => <<table name>>
);
END;
You can have that run asynchronously
DECLARE
  l_jobno INTEGER;
BEGIN
  dbms_job.submit(
    l_jobno,
    'BEGIN dbms_stats.gather_table_stats( ''<<schema name>>'', ''<<table name>>'' ); END;',
    sysdate + interval '1' minute
  );
END;
"Is there a way to have this job being triggered when there is a substantial proportion of data changed"
In theory you could do an AFTER INSERT trigger on a table that automatically sets off DBMS_STATS.
But be careful what you wish for. If you are in the middle of a large ETL job, having just inserted a million rows into the table, you don't necessarily want to automatically immediately set off a DBMS_STATS job. Equally, when you are at your busiest on-line processing time (eg lunch) you don't want to slow it down by gathering stats at the same time.
You generally want these stats gathering jobs running at low-usage periods (nights, weekends). If you have a large batch run that loads into a lot of tables, I'd build the DBMS_STATS into the batch.
