How DBMS_STATS.GATHER_TABLE_STATS works in Oracle

I need clarification on how DBMS_STATS.GATHER_TABLE_STATS works.
I have a scenario where I am creating a table, and the same table is also used in a package; in other words, the package compilation depends on the table creation.
Is it mandatory to include DBMS_STATS.GATHER_TABLE_STATS after the index creation command in the table creation script?
In which situation does DBMS_STATS.GATHER_TABLE_STATS matter: package compilation or package execution? Please confirm.

DBMS_STATS provides information to the Oracle optimizer, enabling Oracle to build efficient execution plans for SQL statements. Optimizer statistics are never necessary for compiling objects.
Gathering statistics is such a complicated subject it can be difficult to even ask the right questions at first. I'd recommend starting by reading the manual, such as Managing Optimizer Statistics: Basic Topics.
Typically your statistics gathering strategy should look something like this:
Gather statistics manually whenever an object's data significantly changes.
Use the default autotask to automatically gather statistics for objects that have been slowly changing or were accidentally forgotten.
To answer some specific questions, you probably do not want to gather statistics right after creating a table. Only gather statistics after changing the data. Gathering statistics on an empty table can be dangerous; Oracle will think the table has 0 rows until the next time statistics are gathered. It would be better to have no statistics at all, so that Oracle can use dynamic sampling to estimate them.
(But on the other hand, if the table is created with a CREATE TABLE ... AS SELECT statement and contains a lot of data, you may want to gather statistics. Then again, if you're using 12c, Oracle might gather statistics automatically when creating the table.)
You also generally do not need to gather statistics after creating an index. Oracle automatically gathers index statistics when an index is created or rebuilt.
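When you do gather statistics manually, for example after a large data load, a call along these lines is typical. This is only a hedged sketch: MY_SCHEMA and MY_TABLE are placeholder names and the parameter choices are just reasonable defaults.
-- Gather table and index statistics after a significant data change
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MY_SCHEMA',
    tabname          => 'MY_TABLE',
    cascade          => TRUE,                          -- also refresh index statistics
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);  -- let Oracle choose the sample size
END;
/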

Related

Dynamic Audit Trigger

I want to keep logs of all tables in one single log table. Suppose a DML operation runs on any table inside the DB; that should be logged to the one log table.
But there should be a dynamic trigger which does not hard-code the column names for every table.
Is there any solution for this?
Regards,
Somdutt Harihar
"Is there any solution for this"
No. This is not how databases work. Strongly enforced data structures are what they do, and that applies to audit tables just as much as to transaction tables.
The reason is quite clear: the time you save not writing audit code specific to each transactional table is the time you will spend writing a query to retrieve the audit records. The difference is, when you're trying to get the audit records out you will have your boss standing over your shoulder demanding to know when you can tell them what happened to the payroll records last month. Or asking how long it will take you to produce that report for the regulators, are you trying to make the company look like a bunch of clowns? You get the picture. This is not where you want to be.
Also, the performance of a single table to store all the changes to all the tables in the database? That is going to be so slow, you have no idea.
The point is, we can generate the auditing code. It is easy to write some SQL which interrogates the data dictionary and produces DDL for the target tables and triggers to populate those tables.
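For instance, here is a hedged sketch of that idea, generating the CREATE TABLE DDL for a shadow audit table of one table from the data dictionary (the EMPLOYEES table and the _AUD naming are purely illustrative):
SELECT 'CREATE TABLE ' || table_name || '_AUD ('
       || LISTAGG(column_name || ' ' || data_type ||
                  CASE WHEN data_type = 'VARCHAR2'
                       THEN '(' || char_length || ')' END, ', ')
          WITHIN GROUP (ORDER BY column_id)
       || ', AUD_ACTION VARCHAR2(1), AUD_TIMESTAMP DATE)' AS audit_ddl
FROM   user_tab_columns
WHERE  table_name = 'EMPLOYEES'
GROUP  BY table_name;
The same approach can be extended to spool trigger DDL that copies the :OLD or :NEW values into the generated audit tables.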
In fact it gets even easier in 11.2.0.4 and later because we can use FLASHBACK DATA ARCHIVE (formerly Oracle Total Recall) to build and maintain such journalling functionality automatically, and query it automatically with the as of syntax. Find out more.
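A hedged sketch of that approach, assuming you have an existing tablespace (here called FDA_TS), the FLASHBACK ARCHIVE ADMINISTER privilege, and a table T to track:
-- Create a flashback archive and attach a table to it (names are hypothetical)
CREATE FLASHBACK ARCHIVE fda_one_year TABLESPACE fda_ts RETENTION 1 YEAR;
ALTER TABLE t FLASHBACK ARCHIVE fda_one_year;
-- Later, query the table as it was at an earlier point in time
SELECT * FROM t AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;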
Okay, so technically there is a solution. You could have a trigger on each table which executes some dynamic PL/SQL to interrogate the data dictionary and assembles a piece of JSON which you stuff into your single table. The single table could be partitioned by day range and sub-partitioned by table name (assuming you have licensed the Partitioning option) to mitigate the performance of querying it.
But that is extremely complex. Running dynamic PL/SQL for every DML statement will have a bad effect on performance, which the users will notice. And this still doesn't solve the fundamental problem of retrieving the audit trail when you need it.
To audit DML actions on any table, just enable such auditing with the following code:
audit insert table, update table, delete table;
All such actions on tables will then be logged to the audit trail and visible in the SYS.DBA_AUDIT_OBJECT view.
Auditing only logs the timestamp, user, host and other parameters, not exact copies of the new or old rows.
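To see what was captured, you can then query that view along these lines (a hedged sketch: the schema and table names are placeholders, and AUDIT_TRAIL is assumed to be set to write to the database):
SELECT timestamp, username, userhost, action_name, obj_name
FROM   dba_audit_object
WHERE  owner = 'SCOTT'
AND    obj_name = 'EMP'
ORDER  BY timestamp DESC;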

What is the PostgreSQL equivalent of Oracle's dbms_stats.gather_table_stats?

I have a question about Postgres. I have used dbms_stats.gather_table_stats for performance optimization in Oracle. I would like to switch our database from Oracle to Postgres, so I want to achieve the same feature in Postgres as well. I searched the internet for an equivalent of Oracle's dbms_stats.gather_table_stats in Postgres. All I found were EXPLAIN, VACUUM and the like, which I think already exist in Oracle under the same names, but I can't find a proper counterpart for dbms_stats.gather_table_stats. I am spending a lot of time on this; if you have any advice, could you share it?
The GATHER_TABLE_STATS procedure of DBMS_STATS package collects statistics of the specified table in Oracle.
In Postgres, we use ANALYZE for the same purpose.
ANALYZE collects statistics about the contents of tables in the database, and stores the results in the pg_statistic system catalog. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries.
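For example, a rough equivalent of a manual gather on one table would be the following (my_table is a placeholder name):
-- Refresh planner statistics for one table
ANALYZE VERBOSE my_table;
-- Or reclaim dead tuples and gather statistics in one pass
VACUUM (ANALYZE, VERBOSE) my_table;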

Verify an Oracle database rollback action is successful

How can I verify an Oracle database rollback action is successful? Can I use Number of rows in activity log and Number of rows in event log?
V$TRANSACTION does not contain historical information but it does contain information about all active transactions. In practice this is often enough to quickly and easily monitor rollbacks and estimate when they will complete.
Specifically the columns USED_UBLK and USED_UREC contain the number of UNDO blocks and records remaining. USED_UREC is not always the same as the number of rows; sometimes the number is higher because it includes index entries and sometimes the number is lower because it groups inserts together.
During a long rollback those numbers will decrease until they hit 0. No rows in the view implies that all transactions have successfully committed or rolled back. Below is a simple example.
create table table1(a number);
create index table1_idx on table1(a);
insert into table1 values(1);
insert into table1 values(1);
insert into table1 values(1);
select used_ublk, used_urec, ses_addr from v$transaction;
USED_UBLK  USED_UREC  SES_ADDR
---------  ---------  ----------------
        1          6  000007FF1C5A8EA0
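Once the transaction finishes rolling back (or you issue the rollback yourself), the row disappears from V$TRANSACTION, which is the signal that the rollback completed:
rollback;
select used_ublk, used_urec, ses_addr from v$transaction;
no rows selected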
Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.
LogMiner Benefits
All changes made to user data or to the database dictionary are recorded in the Oracle redo log files so that database recovery operations can be performed.
Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The following list describes some key capabilities of LogMiner:
Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth. For example, a user application could mistakenly update a database to give all employees 100 percent salary increases rather than 10 percent increases, or a database administrator (DBA) could accidentally delete a critical system table. It is important to know exactly when an error was made so that you know when to initiate time-based or change-based recovery. This enables you to restore the database to the state it was in just before corruption. See Querying V$LOGMNR_CONTENTS Based on Column Values for details about how you can use LogMiner to accomplish this.
Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-specific undo operation to return the table to its original state. This is achieved by applying table-specific reconstructed SQL statements that LogMiner provides in the reverse order from which they were originally issued. See Scenario 1: Using LogMiner to Track Changes Made by a Specific User for an example. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.
Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes. See Scenario 2: Using LogMiner to Calculate Table Access Statistics for an example.
Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them. (However, to use LogMiner for such a purpose, you need to have an idea when the event occurred so that you can specify the appropriate logs for analysis; otherwise you might have to mine a large number of redo log files, which can take a long time. Consider using LogMiner as a complementary activity to auditing database use. See the Oracle Database Administrator's Guide for information about database auditing.)
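As a hedged sketch, a minimal LogMiner session looks something like this (the archived log path and table name are placeholders, and the usual LogMiner privileges are assumed):
-- Register a log file with the LogMiner session
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/arch_1_1234.arc', OPTIONS => DBMS_LOGMNR.NEW);
-- Start LogMiner using the online catalog as the dictionary source
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- Review the recorded changes for one table
SELECT username, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  table_name = 'TABLE1';
-- End the session when done
EXECUTE DBMS_LOGMNR.END_LOGMNR;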
Enjoy.

How to invalidate a SQL statement in the Oracle SQL area so that a new plan is produced when collecting statistics

I have a table and a query (within a PL/SQL packge) accessing that table. Statistics are collected weekly normally.
A large update has been run on the table, resulting in significantly different data distribution on a particular indexed column. The query plan used by Oracle (which I can see from v$sqlarea) is sub-optimal. If I take an explain plan on the same* query from SQL*Plus, a good plan is returned.
I have since collected statistics on the table. Oracle is still using the query plan that it originally came up with. v$sqlarea.last_load_time suggests this was a plan generated prior to the statistics generation. I thought regenerating statistics would have invalidated plans in the SQL cache.
Is there any way to remove just this statement from the SQL cache?
(* Not character-for-character, matches-in-the-SQL-cache same, but the same statement).
If you are using 10.2.0.4 or later, you should be able to use the DBMS_SHARED_POOL package to purge a single cursor from the shared pool.
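A hedged sketch of how that looks; the SQL text filter and the address/hash value literal are placeholders you would take from your own V$SQLAREA row:
-- Find the address and hash value of the cursor you want to purge
SELECT address, hash_value, sql_text
FROM   v$sqlarea
WHERE  sql_text LIKE 'SELECT%my_table%';
-- Purge that single cursor ('C' identifies the object as a cursor)
EXEC DBMS_SHARED_POOL.PURGE('000007FF1C5A8EA0,3456789012', 'C');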
I found out (when researching something else) that what I should have done was to use
no_invalidate => FALSE
when collecting the statistics by calling gather_table_stats. This would have caused all SQL plans referencing the table to be invalidated immediately.
The Oracle docs say:
Does not invalidate the dependent cursors if set to TRUE. The procedure invalidates the dependent cursors immediately if set to FALSE. Use DBMS_STATS.AUTO_INVALIDATE to have Oracle decide when to invalidate dependent cursors. This is the default.
The default of AUTO_INVALIDATE seems to cause invalidation of SQL statements within the next 5 hours. This is to stop massive number of hard-parses if you are collecting statistics on lots of objects.
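Put together, the call would look something like this (a hedged sketch; the schema and table names are placeholders):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => 'MY_SCHEMA',
    tabname       => 'MY_TABLE',
    no_invalidate => FALSE);   -- invalidate dependent cursors immediately
END;
/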

Can I detect the version of a table's DDL in Oracle?

In Informix, I can do a select from the systables table, and can investigate its version column to see what numeric version a given table has. This column is incremented with every DDL statement that affects the given table. This means I have the ability to see whether a table's structure has changed since the last time I connected.
Is there a similar way to do this in Oracle?
Not really. The Oracle DBA/ALL/USER_OBJECTS view has a LAST_DDL_TIME column, but it is affected by operations other than structure changes.
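For what it is worth, checking that column is as simple as the query below, but remember that grants, compiles and similar operations move it too:
SELECT object_name, last_ddl_time
FROM   user_objects
WHERE  object_type = 'TABLE'
ORDER  BY last_ddl_time DESC;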
You can do that (and more) with a DDL trigger that keeps track of changes to tables. There's an interesting article with example here.
If you really want to do so, you'd have to use Oracle's auditing functions to audit the changes. It could be as simple as:
AUDIT ALTER TABLE BY [schema I care about] WHENEVER SUCCESSFUL;
That would at least capture the successful changes, ignoring drops and creates. Unfortunately, unwinding the stack of the table's historical structure by mining the audit trail is left as an exercise to the reader in Oracle, or to licensing the Change Management Pack.
You could also roll your own auditing by writing system-event triggers which are invoked on DDL statements. You'd end up having to write your own SQL parser if you really wanted to see what was changing.
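A hedged sketch of that roll-your-own approach, minus the parsing, might look like this (the ddl_log table and trigger name are hypothetical):
-- Record which object was changed, by whom and when, for each DDL statement
CREATE TABLE ddl_log (
  ddl_time     DATE,
  ddl_user     VARCHAR2(128),
  object_owner VARCHAR2(128),
  object_name  VARCHAR2(128),
  ddl_event    VARCHAR2(30)
);
CREATE OR REPLACE TRIGGER trg_log_ddl
AFTER CREATE OR ALTER OR DROP ON DATABASE
BEGIN
  INSERT INTO ddl_log (ddl_time, ddl_user, object_owner, object_name, ddl_event)
  VALUES (SYSDATE, ORA_LOGIN_USER, ORA_DICT_OBJ_OWNER, ORA_DICT_OBJ_NAME, ORA_SYSEVENT);
END;
/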
