Will DBMS_STATS analyze the table?

I use the DBMS_STATS.GATHER_SCHEMA_STATS procedure to collect statistics in my code:
BEGIN
   DBMS_STATS.gather_schema_stats
      (ownname          => '<SCHEMA NAME>',
       estimate_percent => DBMS_STATS.auto_sample_size,
       options          => 'GATHER STALE',
       degree           => NULL,
       cascade          => DBMS_STATS.auto_cascade,
       granularity      => 'AUTO'
      );
END;
My question: after I executed this code, I checked the last-analyzed date of the tables using the TOAD tool, and it shows some past date instead of the date on which I ran the DBMS_STATS.GATHER_SCHEMA_STATS procedure.
Does this mean that DBMS_STATS will not analyze the table?

When you specify the option GATHER STALE, you are telling Oracle that you only want to gather statistics on objects that have undergone "significant" change since the last time statistics were gathered on that object. If Oracle determines that the table has not changed much since statistics were last gathered, it will not gather statistics again.
Oracle determines that tables have changed significantly by monitoring DML on those tables (monitoring is on by default in 11g; you had to enable it explicitly in earlier versions). That data is written to DBA_TAB_MODIFICATIONS periodically (roughly every few hours). When DBMS_STATS runs with the GATHER STALE option, Oracle compares the approximate number of changes in DBA_TAB_MODIFICATIONS against the prior statistics for the table to determine whether enough rows have changed to make it worthwhile to gather statistics again.
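If you want to see what Oracle has recorded, you can flush the in-memory monitoring data to disk and compare the change counts against the last-gathered row counts yourself. A minimal sketch (the schema name is a placeholder; the 10 percent figure in the comment is the classic staleness default):

-- Force the in-memory DML monitoring data to disk so the view is current.
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- A table is considered stale once its changes exceed roughly 10% of num_rows.
SELECT m.table_name,
       m.inserts + m.updates + m.deletes AS changes,
       t.num_rows
  FROM dba_tab_modifications m
  JOIN dba_tables t
    ON  t.owner      = m.table_owner
   AND  t.table_name = m.table_name
 WHERE m.table_owner = 'MY_SCHEMA';   -- placeholder schema name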

Related

How DBMS_STATS.GATHER_TABLE_STATS works in Oracle

I need clarification on how DBMS_STATS.GATHER_TABLE_STATS works.
I have a scenario where I am creating a table, and the same table is also used in a package; in other words, the package compilation depends on the table creation.
Is it mandatory to include DBMS_STATS.GATHER_TABLE_STATS after the index creation command in the table creation script?
In which situation does DBMS_STATS.GATHER_TABLE_STATS matter: package compilation or package execution? Please confirm.
DBMS_STATS provides information to the Oracle optimizer, enabling Oracle to build efficient execution plans for SQL statements. Optimizer statistics are never necessary for compiling objects.
Gathering statistics is such a complicated subject it can be difficult to even ask the right questions at first. I'd recommend starting by reading the manual, such as Managing Optimizer Statistics: Basic Topics.
Typically your statistics gathering strategy should look something like this:
Gather statistics manually whenever an object's data significantly changes.
Use the default autotask to automatically gather statistics for objects that have been slowly changing or were accidentally forgotten.
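To confirm that the default autotask is enabled (11g), you can query the autotask views; a minimal sketch:

SELECT client_name, status
  FROM dba_autotask_client
 WHERE client_name = 'auto optimizer stats collection';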
To answer some specific questions: you probably do not want to gather statistics right after creating a table. Only gather statistics after changing the data. Gathering statistics on an empty table can be dangerous; Oracle will think the table has 0 rows until the next time statistics are gathered. It would be better to have no statistics at all, so that Oracle can use dynamic sampling to estimate the statistics.
(But on the other hand, if the table is created with a CREATE TABLE ... AS SELECT statement and contains a lot of data, you may want to gather statistics. Then again, if you're using 12c, it might gather statistics automatically when creating the table.)
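For example, if statistics were accidentally gathered while the table was still empty, you can delete them so that the optimizer falls back to dynamic sampling; a minimal sketch (the table name is hypothetical):

BEGIN
   -- Remove the misleading zero-row statistics; with no statistics present,
   -- the optimizer will dynamically sample the table instead.
   DBMS_STATS.delete_table_stats(
      ownname => USER,
      tabname => 'MY_TABLE');   -- hypothetical table name
END;
/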
You also generally do not need to gather statistics after creating an index. Oracle automatically gathers index statistics when an index is created or rebuilt.

Oracle: Is it necessary to gather table stats for a new table with a new index?

I just created a new table on 11gR2 and loaded it with data, with no index. After the loading was completed, I created several indexes on the new table, including the primary key constraint.
CREATE TABLE xxxx (col1 varchar2(20), ...., coln varchar2(10));
INSERT INTO xxxx SELECT * FROM another_table;
ALTER TABLE xxxx ADD CONSTRAINT xxxc PRIMARY KEY(col_list);
CREATE INDEX xxxx_idx1 ON xxxx (col3,col4);
At this point, do I still need to use DBMS_STATS.GATHER_TABLE_STATS(v_owner,'XXXX') to gather table stats?
If yes, why? Oracle says in the docs that "Oracle Database now automatically collects statistics during index creation and rebuild".
I don't want to wait for the overnight automatic stats gathering because I need to report the actual size of the table and its indexes immediately after the above operations. I think running DBMS_STATS.GATHER_TABLE_STATS may give me more accurate usage data. I could be wrong though.
Thanks in advance,
In Oracle 11gR2 you still need to gather table statistics. I suspect you read the documentation for Oracle 12c, which automatically collects statistics, but only for direct-path inserts; that is not your case, since your insert is conventional. Also, if you gather statistics (with the default options) on a brand-new table that has not yet been queried, no histograms will be generated.
Index statistics are gathered when an index is built, so there is no need to gather them explicitly. When you later gather table statistics, you should use the DBMS_STATS.GATHER_TABLE_STATS option cascade => false so that index statistics aren't gathered twice.
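A minimal sketch of that call (the owner name is a placeholder):

BEGIN
   DBMS_STATS.gather_table_stats(
      ownname => 'MY_SCHEMA',   -- placeholder owner
      tabname => 'XXXX',
      cascade => FALSE);        -- skip the indexes; their stats were collected at CREATE INDEX time
END;
/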
You can simply check the statistics using
SELECT * FROM ALL_TAB_COL_STATISTICS WHERE TABLE_NAME = 'XXXX';
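Similarly, to confirm the index statistics that were collected when the indexes were built, you can query the index statistics view (a sketch):

SELECT index_name, num_rows, distinct_keys, last_analyzed
  FROM all_ind_statistics
 WHERE table_name = 'XXXX';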

Gather_table_stats always updates stats

In ODI we used DBMS_STATS.GATHER_SCHEMA_STATS with the option options => 'GATHER AUTO' to recompute the stats only when a table changed by a certain percentage. (http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036456)
Now I want to move the calculation of statistics to the table level (in the IKM), but DBMS_STATS.GATHER_TABLE_STATS does not seem to have a setting to recompute the stats only when they need an update (as determined by Oracle).
(http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461)
Always recomputing the statistics of all our tables is too costly.
Does anyone know a way to check whether a table needs its statistics updated, or a hidden option in DBMS_STATS.GATHER_TABLE_STATS?
DBMS_STATS.GATHER_SCHEMA_STATS has an option to LIST STALE objects; you could run that first and check whether your table is in the list of returned objects.
Alternatively, check the column STALE_STATS in USER_TAB_STATISTICS / ALL_TAB_STATISTICS / DBA_TAB_STATISTICS.
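A minimal sketch of the LIST STALE approach (nothing is gathered here; the stale objects in the current user's schema are only listed):

DECLARE
   l_objects   DBMS_STATS.objecttab;
BEGIN
   -- LIST STALE only reports stale objects; it does not gather statistics.
   DBMS_STATS.gather_schema_stats(
      ownname => USER,
      options => 'LIST STALE',
      objlist => l_objects);

   FOR i IN 1 .. l_objects.COUNT LOOP
      DBMS_OUTPUT.put_line(l_objects(i).objname);
   END LOOP;
END;
/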

Verify an Oracle database rollback action is successful

How can I verify that an Oracle database rollback action is successful? Can I use the number of rows in the activity log and the number of rows in the event log?
V$TRANSACTION does not contain historical information but it does contain information about all active transactions. In practice this is often enough to quickly and easily monitor rollbacks and estimate when they will complete.
Specifically, the columns USED_UBLK and USED_UREC contain the number of undo blocks and undo records remaining. USED_UREC is not always the same as the number of rows; sometimes the number is higher because it includes index entries, and sometimes the number is lower because it groups inserts together.
During a long rollback those numbers will decrease until they hit 0. No rows for the transaction in the view implies that it successfully committed or rolled back. Below is a simple example.
create table table1(a number);
create index table1_idx on table1(a);

insert into table1 values(1);
insert into table1 values(1);
insert into table1 values(1);

-- 3 table rows plus 3 index entries account for the 6 undo records.
select used_ublk, used_urec, ses_addr from v$transaction;

USED_UBLK  USED_UREC  SES_ADDR
---------- ---------- ----------------
         1          6 000007FF1C5A8EA0
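To watch a rollback in progress, you could poll the same view from another session until the row disappears; a sketch (the SES_ADDR value is the one returned above):

-- In the original session:
rollback;

-- From another session, repeat until no row is returned:
select used_ublk, used_urec
  from v$transaction
 where ses_addr = hextoraw('000007FF1C5A8EA0');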
Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.
LogMiner Benefits

All changes made to user data or to the database dictionary are recorded in the Oracle redo log files so that database recovery operations can be performed.

Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The following list describes some key capabilities of LogMiner:

Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth. For example, a user application could mistakenly update a database to give all employees 100 percent salary increases rather than 10 percent increases, or a database administrator (DBA) could accidentally delete a critical system table. It is important to know exactly when an error was made so that you know when to initiate time-based or change-based recovery. This enables you to restore the database to the state it was in just before corruption. See Querying V$LOGMNR_CONTENTS Based on Column Values for details about how you can use LogMiner to accomplish this.

Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-specific undo operation to return the table to its original state. This is achieved by applying table-specific reconstructed SQL statements that LogMiner provides in the reverse order from which they were originally issued. See Scenario 1: Using LogMiner to Track Changes Made by a Specific User for an example. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.

Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes. See Scenario 2: Using LogMiner to Calculate Table Access Statistics for an example.

Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them. (However, to use LogMiner for such a purpose, you need to have an idea when the event occurred so that you can specify the appropriate logs for analysis; otherwise you might have to mine a large number of redo log files, which can take a long time. Consider using LogMiner as a complementary activity to auditing database use. See the Oracle Database Administrator's Guide for information about database auditing.)
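A minimal sketch of a LogMiner session (the archived log path and table name are hypothetical; DICT_FROM_ONLINE_CATALOG reads the dictionary from the open source database):

-- Register an archived log file and start the LogMiner session.
BEGIN
   DBMS_LOGMNR.add_logfile(
      logfilename => '/u01/arch/arch_1_100.arc',   -- hypothetical path
      options     => DBMS_LOGMNR.new);
   DBMS_LOGMNR.start_logmnr(
      options => DBMS_LOGMNR.dict_from_online_catalog);
END;
/

-- Query the redo history, including the SQL needed to undo each change.
SELECT scn, timestamp, username, sql_redo, sql_undo
  FROM v$logmnr_contents
 WHERE table_name = 'EMPLOYEES';                   -- hypothetical table

-- End the session when finished.
EXEC DBMS_LOGMNR.end_logmnr;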
Enjoy.

Sybase ASE 15.0.2 - dynamically update statistics/ index statistics

I am trying to update statistics on some of our tables whose names I receive as input to my procedure, but I couldn't compile the procedure with the code below.
update index statistics @tableName
Aren't dynamic table names allowed? Or, would the statement below work?
select @statsCmd = 'update index statistics ' + @tableName
exec(@statsCmd)
Also, what are the notable differences between "update statistics" and "update index statistics"?
It does appear that update statistics does not allow dynamic table names, but the second statement should work without issue.
Regarding update statistics & update index statistics:
Update statistics can be run against tables without indexes, and other non-index objects, as well as against indexes. If run against an index, it actually executes an update index statistics behind the scenes. Update index statistics only updates statistics for the indexes on the specified table.
Also, have you looked into using the Job Scheduler and the datachange function to automate your update statistics?
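A sketch of how datachange can gate the update (the table name and the 10 percent threshold are illustrative):

declare @pct float

-- datachange returns the percentage of the table modified since the last
-- statistics update (null partition and column means the whole table).
select @pct = datachange('my_table', null, null)

if @pct > 10
begin
    update index statistics my_table
end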
