I’ve been tasked with doing some housekeeping on an Oracle schema I have access to. In a nutshell, I’d like to drop any tables that have not been ‘used’ in the last 3 months (tables that haven’t been queried or had data manipulated in the last 3 months). I have read/write access to the schema but I’m not a DBA; I run relatively basic DML/DDL queries in Oracle.
I’m trying to figure out if there’s a way for me to identify old/redundant tables; here’s what I’ve tried (mostly unsuccessfully):
USER_TABLES was my first port of call, but the LAST_ANALYZED date in this view doesn’t seem to be the last-modified/last-queried date I’m looking for.
Googling has brought the DBA_HIST tables to my attention. I’ve tried querying some of these (e.g. DBA_HIST_SYSSTAT) but I’m confronted with ORA-00942: table or view does not exist.
I’ve also tried querying V$SESSION_WAIT, V$ACTIVE_SESSION_HISTORY and V$SEGMENT_STATISTICS, but I get the same ORA-00942 error.
I’d be grateful for any advice about whether the options above actually offer the sort of information I need about tables, and if so what I can do to work around the errors I’m getting. Alternatively, are there any other options that I could explore?
Probably the easiest thing to do, to be 100% sure, is to enable auditing on the Oracle tables that you're interested in (possibly all of them). Once enabled, Oracle has an audit trail view (DBA_AUDIT_TRAIL) that you can query to find out whether the table(s) have been accessed. You can enable auditing by issuing: AUDIT <operation> ON <schema>.<table> BY SESSION;
I chose "by session" so that you only get a single record per session, no matter how many times the session performs the operation (to minimize the records in the audit table).
Example:
audit select on bob.inventory by session;
Then you can query the dba_audit_trail after some time passes to see if any records show up for that table.
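For example (assuming you have access to DBA_AUDIT_TRAIL, and sticking with the bob.inventory example above), a check could look like:

SELECT username, os_username, userhost, action_name, timestamp
FROM   dba_audit_trail
WHERE  owner    = 'BOB'          -- example schema from above
AND    obj_name = 'INVENTORY'    -- example table from above
ORDER  BY timestamp DESC;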
You can disable auditing by issuing the "noaudit" command.
Hope that helps.
-Jim
Related
I want to keep logs of all tables in one single log table. If any DML operation happens on any table inside the DB, it should be logged in that one table.
But there should be a dynamic trigger which does not hard-code the column names for every table.
Is there any solution for this?
Regards,
Somdutt Harihar
"Is there any solution for this"
No. This is not how databases work. Strongly enforced data structures are what they do, and that applies to audit tables just as much as transaction tables.
The reason is quite clear: the time you save not writing audit code specific to each transactional table is the time you will spend writing a query to retrieve the audit records. The difference is, when you're trying to get the audit records out you will have your boss standing over your shoulder demanding to know when you can tell them what happened to the payroll records last month. Or asking how long it will take you to produce that report for the regulators: are you trying to make the company look like a bunch of clowns? You get the picture. This is not where you want to be.
Also, the performance of a single table to store all the changes to all the tables in the database? That is going to be so slow, you have no idea.
The point is, we can generate the auditing code. It is easy to write some SQL which interrogates the data dictionary and produces DDL for the target tables and triggers to populate those tables.
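As a rough sketch of the idea (the target table names below are purely illustrative), a query like this spools journal-table DDL straight out of the data dictionary; a similar query over USER_TAB_COLUMNS can assemble the matching trigger bodies:

SELECT 'CREATE TABLE ' || table_name || '_JN AS '
    || 'SELECT t.*, CAST(NULL AS VARCHAR2(10)) AS jn_operation, '
    ||           'CAST(NULL AS TIMESTAMP) AS jn_timestamp '
    || 'FROM ' || table_name || ' t WHERE 1 = 0' AS journal_ddl
FROM   user_tables
WHERE  table_name IN ('EMP', 'DEPT');   -- illustrative target tables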
In fact it gets even easier in 11.2.0.4 and later, because we can use FLASHBACK DATA ARCHIVE (formerly Oracle Total Recall) to build and maintain such journalling functionality automatically, and query it with the AS OF syntax.
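A short sketch (the archive name, tablespace and table below are made up, and you need the FLASHBACK ARCHIVE ADMINISTER privilege or an archive granted to you):

CREATE FLASHBACK ARCHIVE fda_one_year TABLESPACE users RETENTION 1 YEAR;
ALTER TABLE emp FLASHBACK ARCHIVE fda_one_year;

-- later, query the history transparently
SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '7' DAY);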
Okay, so technically there is a solution. You could have a trigger on each table which executes some dynamic PL/SQL to interrogate the data dictionary and assembles a piece of JSON which you stuff into your single table. The single table could be partitioned by day range and sub-partitioned by table name (assuming you have licensed the Partitioning option) to mitigate the performance of querying it.
But that is extremely complex. Running dynamic PL/SQL for every DML statement will have a bad effect on performance, which the users will notice. And this still doesn't solve the fundamental problem of retrieving the audit trail when you need it.
To audit DML actions on any table, just enable such auditing with the following command:
audit insert table, update table, delete table;
All such actions on tables will then be visible in the SYS.DBA_AUDIT_OBJECT view.
The audit will only log the timestamp, user, host and other parameters, not exact copies of new or old rows.
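After some activity you can then see who did what to which table with a query along these lines (the schema name is just an example, and you need access to the DBA views):

SELECT username, owner, obj_name, action_name, timestamp
FROM   dba_audit_object
WHERE  owner = 'BOB'             -- example schema
ORDER  BY timestamp DESC;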
I have around 500 tables in the DB. If any DML operation is performed on one of those tables, a trigger should fire to capture that DML activity and load it into an audit table. I don't want to write 500 individual triggers. Is there any simple method to achieve this?
To switch all high level auditing of DML statements for all tables:
AUDIT INSERT TABLE, UPDATE TABLE, DELETE TABLE;
Which objects we can audit depends on what privileges we have.
AUDIT will write basic information to the audit trail. The destination depends on the value of the AUDIT_TRAIL parameter. If the parameter is set to db the output is written to a database table: we can see our trail in USER_AUDIT_TRAIL or (if we have the privilege) everything in DBA_AUDIT_TRAIL.
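For example, you could check the destination and then look at your own trail like this (assuming you can read V$PARAMETER):

SELECT value FROM v$parameter WHERE name = 'audit_trail';

SELECT obj_name, action_name, timestamp
FROM   user_audit_trail
ORDER  BY timestamp DESC;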
The audit trail is high level, which means it records that user FOX updated the EMP table but doesn't tell us which records were affected or what the actual changes were. We can implement more granular auditing by creating Fine-Grained Audit policies. This requires a lot more work on our part, so we may decide not to enable it for all our tables.
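As a sketch, an FGA policy on a single table looks roughly like this (the schema, table and policy names are illustrative); its records show up in DBA_FGA_AUDIT_TRAIL:

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'BOB',            -- illustrative schema
    object_name     => 'EMP',            -- illustrative table
    policy_name     => 'EMP_DML_POLICY', -- illustrative policy name
    statement_types => 'INSERT, UPDATE, DELETE');
END;
/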
DML triggers are defined on individual tables, not on the entire database. Even ignoring the complexity of handling the disparate data types, usage and context of the various tables, what you are looking for would be extremely complex, and is something no RDBMS has addressed at the database level.
There is some information on triggers at this link:
https://docs.oracle.com/cd/A57673_01/DOC/server/doc/SCN73/ch15.htm
You could place a trigger on each table that calls the same procedure ... but then all that complexity comes into play.
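A minimal sketch of that approach (all names below are made up): one log table, one shared procedure, and a thin trigger per table, for example:

CREATE TABLE dml_log (
  table_name VARCHAR2(30),
  operation  VARCHAR2(10),
  changed_by VARCHAR2(30),
  changed_at TIMESTAMP
);

CREATE OR REPLACE PROCEDURE log_dml (p_table VARCHAR2, p_op VARCHAR2) AS
BEGIN
  INSERT INTO dml_log VALUES (p_table, p_op, USER, SYSTIMESTAMP);
END;
/

-- one of these per table, e.g. for EMP
CREATE OR REPLACE TRIGGER emp_dml_log
AFTER INSERT OR UPDATE OR DELETE ON emp
BEGIN
  log_dml('EMP', CASE WHEN INSERTING THEN 'INSERT'
                      WHEN UPDATING  THEN 'UPDATE'
                      ELSE 'DELETE' END);
END;
/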
I need to do a task but I have no idea how to do it.
Here is the problem:
I have about 1000 tables in an Oracle database and many processes.
Each process does one or more SELECTs on one or more tables.
Because it's almost impossible to look in the source code to find out which process does which SELECT on which tables, I would like to have some kind of trigger on SELECT on every table.
The idea is that I will launch the processes one by one to be able to see which tables they query.
I know that there is no trigger on SELECT, but is there anything else?
I need to do this as a one-shot exercise, just to recover the necessary info; it will not run every day.
You could activate auditing. You can audit all SELECT with:
AUDIT SELECT TABLE;
You can specify BY SESSION so that only one record will be written to the audit trail per table accessed per session.
Your AUDIT_TRAIL parameter must be set to either DB or OS. If it is set to DB, the audit trail will be written to the SYS.AUD$ table.
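So a one-shot run could look something like this (assuming you can read DBA_AUDIT_TRAIL, or can have a DBA query SYS.AUD$ for you):

AUDIT SELECT TABLE BY SESSION;

-- run the process, then:
SELECT sessionid, owner, obj_name, MIN(timestamp) AS first_access
FROM   dba_audit_trail
GROUP  BY sessionid, owner, obj_name
ORDER  BY sessionid;

-- switch it off again when finished
NOAUDIT SELECT TABLE;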
Assuming that you can map a "process" in your terminology to a particular Oracle session, you could trace the Oracle session. That would show you all the SQL statements executed by that session.
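For example, with DBMS_MONITOR (10g and later; the SID/serial# values and account name below are placeholders you would take from V$SESSION), the trace file then lands in the server's trace directory:

SELECT sid, serial# FROM v$session WHERE username = 'APP_USER';  -- placeholder account

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => FALSE);
-- ... run the process ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);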
You could also potentially do a SQL*Net trace from whatever the client machine is (note that the "client machine" in a three-tier environment is the application server). A SQL*Net trace tends not to be nearly as easy to work with, however.
We have some Materialized views in our Oracle 9i database that were created a long time ago, by a guy no longer working here. Is there an easy (or any) method to determine whether Oracle is using these views to serve queries? If they aren't being used any more, we'd like to get rid of them. But we don't want to discover after the fact that those views are the things that allow some random report to run in less than a few hours. The answer I'm dreaming of would be something like
SELECT last_used_date FROM dba_magic
WHERE materialized_view_name = 'peters_mview'
Even more awesome would be something that could tell me what actual SQL queries were using the materialized view. I realize I may have to settle for less.
If there is a solution that requires 10g, we are upgrading soon, so those answers would be useful also.
Oracle auditing can tell you this once it has been configured as per the docs. Then enable it with "AUDIT SELECT ON {name of materialized view}". The audit trail will be in the AUD$ table in the SYS schema.
One method other than auditing would be to read the v$segment_statistics view after one refresh and before the next refresh to see if there have been any reads. You'd have to account for any automatic statistics collection jobs also.
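For example (the owner and object names below are placeholders), you could compare the read statistics between refreshes:

SELECT statistic_name, value
FROM   v$segment_statistics
WHERE  owner           = 'PETER'           -- placeholder owner
AND    object_name     = 'PETERS_MVIEW'    -- placeholder MV name
AND    statistic_name IN ('logical reads', 'physical reads');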
The V$SQLAREA view has two columns which help identify the queries executed by the database:
SQL_TEXT - VARCHAR2(1000) - First thousand characters of the SQL text for the current cursor
SQL_FULLTEXT - CLOB - All characters of the SQL text for the current cursor
We can use these columns to find the queries that use the materialized view in question.
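For instance, something along these lines (using the placeholder MV name from the question) pulls any cached statements that mention it:

SELECT sql_text
FROM   v$sqlarea
WHERE  UPPER(sql_fulltext) LIKE '%PETERS_MVIEW%';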
I'm using (have to) a badly designed Oracle(10) DB, for which I don't have admin rights (although I can create tables, triggers, etc in my scheme).
Now I had run into a problem: this DB connected with several users/programs. I must find out who updates a certain row, when, and if possible: with what kind of statement. Is it possible?
Thanks in advance!
It would be easier to do this if you had admin rights to enable auditing. Without the power of auditing you are left with using triggers to handle the logging of inserts/updates/deletes. In your case, since you are only interested in updates, you can put an AFTER UPDATE trigger on the table which logs to another table what was changed, by whom, from where, to what, and on what day.
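A bare-bones sketch of such a trigger (the table, columns and log-table name are all made up; extend the :OLD/:NEW list to the columns you care about):

CREATE TABLE emp_update_log (
  changed_by  VARCHAR2(30),
  client_host VARCHAR2(64),
  changed_at  TIMESTAMP,
  old_salary  NUMBER,
  new_salary  NUMBER
);

CREATE OR REPLACE TRIGGER emp_upd_log
AFTER UPDATE ON emp
FOR EACH ROW
BEGIN
  INSERT INTO emp_update_log
  VALUES (USER,
          SYS_CONTEXT('USERENV', 'HOST'),
          SYSTIMESTAMP,
          :OLD.salary,
          :NEW.salary);
END;
/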
I would create a journal table for the table you are working with. It will show you the operation type and the Oracle user...as well as a bunch of other data if you need it.