Create a PL/SQL trigger for any DML operation performed on any table - Oracle

I have around 500 tables in the DB. If any DML operation is performed on one of those tables, a trigger should fire to capture the DML activity and load it into an audit table. I don't want to write 500 individual triggers. Is there any simple method to achieve this?

To switch on high-level auditing of DML statements for all tables:
AUDIT INSERT TABLE, UPDATE TABLE, DELETE TABLE;
What objects we can manage depends on what privileges we have. Find out more.
AUDIT will write basic information to the audit trail. The destination depends on the value of the AUDIT_TRAIL parameter. If the parameter is set to db, the output is written to a database table: we can see our own trail in USER_AUDIT_TRAIL or (if we have the privilege) everything in DBA_AUDIT_TRAIL.
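For example, a minimal setup might look like this (assuming you are able to restart the instance):
ALTER SYSTEM SET audit_trail=db SCOPE=SPFILE; -- takes effect after a restart
-- Once auditing is switched on, inspect the trail:
SELECT username, obj_name, action_name, timestamp
FROM   dba_audit_trail
ORDER  BY timestamp DESC;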
The audit trail is high level, which means it records that user FOX updated the EMP table but doesn't tell us which records or what the actual changes were. We can implement granular auditing by creating Fine-Grained Audit policies. This requires a lot more work on our part so we may decide not to enable it for all our tables. Find out more.
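For a taste of what a Fine-Grained Audit policy involves, here is a minimal sketch; the HR.EMP table and the policy name are purely illustrative:
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMP',
    policy_name     => 'EMP_DML_FGA',
    statement_types => 'INSERT, UPDATE, DELETE');
END;
/
The detailed records, including the SQL text, then show up in DBA_FGA_AUDIT_TRAIL.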

DML triggers are defined on individual tables, not on the entire database. Ignoring the complexity of maintaining disparate data types, data use, and the context of various tables and their use, what you are looking for would be extremely complex, something no RDBMS has addressed at the database level.
There is some information on triggers at this link:
https://docs.oracle.com/cd/A57673_01/DOC/server/doc/SCN73/ch15.htm
You could place a trigger on each table that calls the same procedure ... but then all that complexity comes into play.
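For what it's worth, a minimal sketch of that pattern, with all names invented for illustration (one shared procedure, one thin trigger per table):
CREATE TABLE dml_audit (
  table_name VARCHAR2(128),
  action     VARCHAR2(10),
  changed_by VARCHAR2(128),
  changed_at DATE
);

CREATE OR REPLACE PROCEDURE log_dml (p_table IN VARCHAR2, p_action IN VARCHAR2) AS
BEGIN
  INSERT INTO dml_audit VALUES (p_table, p_action, USER, SYSDATE);
END;
/

CREATE OR REPLACE TRIGGER trg_emp_audit
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW
BEGIN
  log_dml('EMP', CASE WHEN INSERTING THEN 'INSERT'
                      WHEN UPDATING  THEN 'UPDATE'
                      ELSE 'DELETE' END);
END;
/
You would still need one such trigger per table, which is exactly where the maintenance burden comes back in.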

Related

Retain privileges when dropping objects in Oracle

It occurred to me that I have a fundamental issue with respect to privileges. Anyone who is granted access to my data warehouse will be given privileges to objects in the reporting schema. However, whenever we drop objects, those privileges are lost.
The fundamental requirements that should be met with the approach are:
Indexes not populated during load of data (dropped, disabled?) to avoid populating while inserting
Retain existing privileges.
What do you guys think is the best approach based on the requirements above?
For requirement 1: depending on the version of Oracle you're running, you may be able to alter the indexes invisible. Making indexes invisible will cause the optimizer to ignore them, and it can come in handy because you can simply make them visible again after whatever operation you're performing. Note, though, that invisible indexes are still maintained by DML, so if the goal is to avoid index maintenance during the load altogether, you should alter them unusable instead. More info here: https://oracle-base.com/articles/11g/invisible-indexes-11gr1
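The two variants side by side (index name is illustrative; INVISIBLE needs 11g or later):
ALTER INDEX my_index1 INVISIBLE; -- optimizer ignores it, but DML still maintains it
ALTER INDEX my_index1 VISIBLE;
ALTER INDEX my_index1 UNUSABLE;  -- DML skips it while SKIP_UNUSABLE_INDEXES is TRUE (the default)
ALTER INDEX my_index1 REBUILD;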
For requirement 2: Once an object is dropped, the privileges are dropped along with it. There's not really any straightforward way to retain the grants as they are when an object is dropped, however, you could use a number of different methods to "save" the privileges when a table is dropped. These are just some ideas to get you going, not a guaranteed method of success.
Method 1: Using Triggers and DBMS_SCHEDULER to issue the grants. Triggers can be very powerful, and if you create a trigger that is set to run when a table of a specific name is created under a specific schema, you can use DBMS_SCHEDULER to run a job that will issue the missing grants.
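Here is a rough, hypothetical sketch of that idea; every name is made up, and the grant is handed off to a scheduler job because DBMS_SCHEDULER.CREATE_JOB commits, which a trigger cannot do directly:
CREATE OR REPLACE PROCEDURE schedule_regrant (p_table IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION; -- isolates the implicit commit from the triggering transaction
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'REGRANT_' || p_table,
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN EXECUTE IMMEDIATE ' ||
                  '''GRANT SELECT ON ' || p_table || ' TO reporting_read''; END;',
    start_date => SYSTIMESTAMP,
    enabled    => TRUE,
    auto_drop  => TRUE);
END;
/

CREATE OR REPLACE TRIGGER trg_regrant_after_create
AFTER CREATE ON SCHEMA
BEGIN
  IF ora_dict_obj_type = 'TABLE' AND ora_dict_obj_name = 'SALES_FACT' THEN
    schedule_regrant(ora_dict_obj_name);
  END IF;
END;
/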
Method 2: Per Littlefoot's suggestion, you can save the grant statements in a SQL script and run it manually every time the table is created (or create a trigger for it!)
Method 3: Work with the business and implement a process wherein the table does not need to be dropped, and instead is altered to fit business needs. To use this method, you'll have to understand why the object is being dropped in the first place. Is a drop really necessary to accomplish the desired outcome? I've seen teams request that tables be dropped when they really just wanted the tables to be truncated. If this is one of those scenarios, truncating instead of dropping will let you keep the object and its grants intact.
In any scenario, you'll also want to make sure that you are managing permissions via roles whenever possible, rather than issuing grants to individual users/schemas. Utilizing roles will make managing permissions a lot easier in just about any scenario.
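For instance (role and object names are made up):
CREATE ROLE reporting_read;
GRANT SELECT ON sales_fact TO reporting_read;
GRANT reporting_read TO alice;
After a rebuild, a single GRANT to the role restores access for every member at once.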
If you DROP an object, the grants are gone. However:
Indexes not populated during load of data (dropped, disabled?) to avoid populating while inserting
Retain existing privileges.
Here is one common approach. There are others. If you have partitioning there are better ways.
ALTER INDEX my_index1 UNUSABLE;
ALTER INDEX my_index2 UNUSABLE;
...
ALTER INDEX my_indexn UNUSABLE;
TRUNCATE TABLE my_table_with_n_indexes; -- OPTIONAL (depends if you need to start empty)
INSERT /*+ APPEND */ INTO my_table_with_n_indexes SELECT * FROM my_source_table; -- Do your load here. APPEND hint optional, depending on what you are doing
ALTER INDEX my_index1 REBUILD;
ALTER INDEX my_index2 REBUILD;
...
ALTER INDEX my_indexn REBUILD;

Dynamic Audit Trigger

I want to keep logs of all tables in one single log table. Suppose any DML operation is going on in any table inside the DB; then that should be logged in the one single table.
But there should be a dynamic trigger which does not hard-code the column names for every table.
Is there any solution for this?
Regards,
Somdutt Harihar
"Is there any solution for this"
No. This is not how databases work. Strongly enforced data structures are what they do, and that applies to audit tables just as much as to transaction tables.
The reason is quite clear: the time you save not writing audit code specific to each transactional table is the time you will spend writing a query to retrieve the audit records. The difference is, when you're trying to get the audit records out you will have your boss standing over your shoulder demanding to know when you can tell them what happened to the payroll records last month. Or asking how long it will take you to produce that report for the regulators, are you trying to make the company look like a bunch of clowns? You get the picture. This is not where you want to be.
Also, the performance of a single table to store all the changes to all the tables in the database? That is going to be so slow, you have no idea.
The point is, we can generate the auditing code. It is easy to write some SQL which interrogates the data dictionary and produces DDL for the target tables and triggers to populate those tables.
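As a rough illustration of that generation approach (the audit_log table and the trigger body are deliberately simplified placeholders):
SELECT 'CREATE OR REPLACE TRIGGER aud_' || table_name || CHR(10) ||
       'AFTER INSERT OR UPDATE OR DELETE ON ' || table_name || CHR(10) ||
       'FOR EACH ROW' || CHR(10) ||
       'BEGIN' || CHR(10) ||
       '  INSERT INTO audit_log (table_name, action, changed_by, changed_at)' || CHR(10) ||
       '  VALUES (''' || table_name || ''',' || CHR(10) ||
       '          CASE WHEN INSERTING THEN ''I'' WHEN UPDATING THEN ''U'' ELSE ''D'' END,' || CHR(10) ||
       '          USER, SYSDATE);' || CHR(10) ||
       'END;' AS trigger_ddl
FROM   user_tables;
A real generator would also walk USER_TAB_COLUMNS to write the OLD/NEW column values into per-table journal tables.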
In fact it gets even easier in 11.2.0.4 and later because we can use FLASHBACK DATA ARCHIVE (formerly Oracle Total Recall) to build and maintain such journalling functionality automatically, and query it with the AS OF syntax. Find out more.
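A minimal sketch, assuming a suitable tablespace and the FLASHBACK ARCHIVE privileges (all names illustrative):
CREATE FLASHBACK ARCHIVE audit_fda TABLESPACE users RETENTION 1 YEAR;
ALTER TABLE emp FLASHBACK ARCHIVE audit_fda;
-- History is then queryable without any hand-written triggers:
SELECT * FROM emp AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '1' DAY;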
Okay, so technically there is a solution. You could have a trigger on each table which executes some dynamic PL/SQL to interrogate the data dictionary and assembles a piece of JSON which you stuff into your single table. The single table could be partitioned by day range and sub-partitioned by table name (assuming you have licensed the Partitioning option) to mitigate the performance of querying it.
But that is extremely complex. Running dynamic PL/SQL for every DML statement will have a bad effect on performance, which the users will notice. And this still doesn't solve the fundamental problem of retrieving the audit trail when you need it.
To audit DML actions on any table, just enable such auditing with the following code:
audit insert table, update table, delete table;
All actions on tables will then be logged in the DBA_AUDIT_OBJECT view.
The audit will only log the timestamp, user, host and other parameters, not exact copies of new or old rows.

Verify an Oracle database rollback action is successful

How can I verify an Oracle database rollback action is successful? Can I use Number of rows in activity log and Number of rows in event log?
V$TRANSACTION does not contain historical information but it does contain information about all active transactions. In practice this is often enough to quickly and easily monitor rollbacks and estimate when they will complete.
Specifically the columns USED_UBLK and USED_UREC contain the number of UNDO blocks and records remaining. USED_UREC is not always the same as the number of rows; sometimes the number is higher because it includes index entries and sometimes the number is lower because it groups inserts together.
During a long rollback those numbers will decrease until they hit 0. No rows in the view implies that all transactions have successfully committed or rolled back. Below is a simple example.
create table table1(a number);
create index table1_idx on table1(a);
insert into table1 values(1);
insert into table1 values(1);
insert into table1 values(1);
select used_ublk, used_urec, ses_addr from v$transaction;
 USED_UBLK  USED_UREC SES_ADDR
---------- ---------- ----------------
         1          6 000007FF1C5A8EA0
Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.
LogMiner Benefits
All changes made to user data or to the database dictionary are recorded in the Oracle redo log files so that database recovery operations can be performed.
Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The following list describes some key capabilities of LogMiner:
Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth. For example, a user application could mistakenly update a database to give all employees 100 percent salary increases rather than 10 percent increases, or a database administrator (DBA) could accidentally delete a critical system table. It is important to know exactly when an error was made so that you know when to initiate time-based or change-based recovery. This enables you to restore the database to the state it was in just before corruption. See Querying V$LOGMNR_CONTENTS Based on Column Values for details about how you can use LogMiner to accomplish this.
Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-specific undo operation to return the table to its original state. This is achieved by applying table-specific reconstructed SQL statements that LogMiner provides in the reverse order from which they were originally issued. See Scenario 1: Using LogMiner to Track Changes Made by a Specific User for an example. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.
Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes. See Scenario 2: Using LogMiner to Calculate Table Access Statistics for an example.
Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them. (However, to use LogMiner for such a purpose, you need to have an idea when the event occurred so that you can specify the appropriate logs for analysis; otherwise you might have to mine a large number of redo log files, which can take a long time. Consider using LogMiner as a complementary activity to auditing database use. See the Oracle Database Administrator's Guide for information about database auditing.)
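A minimal LogMiner session might look like the following sketch; the archived log path is illustrative and the supplemental-logging prerequisites are glossed over:
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/u01/arch/arch_0001.log',
                          options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
SELECT username, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  table_name = 'EMP'; -- table name is illustrative
EXEC DBMS_LOGMNR.END_LOGMNR;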
Enjoy.

Can I detect the version of a table's DDL in Oracle?

In Informix, I can do a select from the systables table, and can investigate its version column to see what numeric version a given table has. This column is incremented with every DDL statement that affects the given table. This means I have the ability to see whether a table's structure has changed since the last time I connected.
Is there a similar way to do this in Oracle?
Not really. The Oracle DBA/ALL/USER_OBJECTS view has a LAST_DDL_TIME column, but it is affected by operations other than structure changes.
You can do that (and more) with a DDL trigger that keeps track of changes to tables. There's an interesting article with an example here.
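A bare-bones sketch of such a trigger (the log table and trigger names are made up):
CREATE TABLE ddl_log (
  object_name VARCHAR2(128),
  ddl_event   VARCHAR2(30),
  changed_by  VARCHAR2(128),
  changed_at  DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER trg_track_table_ddl
AFTER ALTER ON SCHEMA
BEGIN
  IF ora_dict_obj_type = 'TABLE' THEN
    INSERT INTO ddl_log (object_name, ddl_event, changed_by)
    VALUES (ora_dict_obj_name, ora_sysevent, USER);
  END IF;
END;
/
A table's "version" is then simply the count of its rows in ddl_log.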
If you really want to do so, you'd have to use Oracle's auditing functions to audit the changes. It could be as simple as:
AUDIT ALTER TABLE WHENEVER SUCCESSFUL on [schema I care about];
That would at least capture the successful changes, ignoring drops and creates. Unfortunately, unwinding the stack of the table's historical structure by mining the audit trail is left as an exercise to the reader in Oracle, or to licensing the Change Management Pack.
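Mining the resulting trail for table alterations would start with something like:
SELECT username, obj_name, action_name, timestamp
FROM   dba_audit_trail
WHERE  action_name = 'ALTER TABLE'
ORDER  BY timestamp;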
You could also roll your own auditing by writing system-event triggers which are invoked on DDL statements. You'd end up having to write your own SQL parser if you really wanted to see what was changing.

Monitoring which statement updates (and when) a certain table row with Oracle 10

I'm using (have to) a badly designed Oracle (10) DB, for which I don't have admin rights (although I can create tables, triggers, etc. in my schema).
Now I have run into a problem: several users/programs connect to this DB. I must find out who updates a certain row, when, and, if possible, with what kind of statement. Is that possible?
Thanks in advance!
It would be easier to do this if you had admin rights to enable auditing. Without the power of auditing you are left with triggers to handle the logging of inserts/updates/deletes. In your case, since you are interested only in updates, you can put a trigger on the table that fires after update and logs to another table what was changed, by whom, from where, to what, and on what day.
I would create a journal table for the table you are working with. It will show you the operation type and the Oracle user, as well as a bunch of other data if you need it.
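A minimal version of that journal plus trigger, with every name and column invented for illustration:
CREATE TABLE my_table_jn (
  changed_at DATE,
  changed_by VARCHAR2(128),
  os_user    VARCHAR2(128),
  machine    VARCHAR2(128),
  old_value  VARCHAR2(4000),
  new_value  VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER trg_my_table_jn
AFTER UPDATE OF some_column ON my_table
FOR EACH ROW
BEGIN
  INSERT INTO my_table_jn
  VALUES (SYSDATE, USER,
          SYS_CONTEXT('USERENV', 'OS_USER'),
          SYS_CONTEXT('USERENV', 'HOST'),
          :OLD.some_column, :NEW.some_column);
END;
/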
