Oracle DB Audit ALL

I have a large database, about 5TB.
I want to audit everything, really everything.
First I tried AUDIT ALL, but according to Oracle's documentation AUDIT ALL does NOT audit everything...
I know that this statement must be executed in order to start auditing the DB:
alter system set audit_trail=db,extended scope=spfile;
But what else should I do to start auditing all the SQL statements that users execute?

You do not need to use the AUDIT feature if you only want a view of user queries (SELECT, UPDATE, INSERT) on your database. AUD$ mostly contains DDL statements, whereas V$SQL contains only DML statements.
A very simple solution is to use another view: V$SQL. You can extract the duration and a lot of other useful info from it.
Useful example:
SELECT * FROM v$sql vv
WHERE lower(vv.SQL_TEXT) like '%delete%'
AND vv.PARSING_SCHEMA_NAME like 'A_SCHEMA%'
ORDER BY vv.LAST_ACTIVE_TIME desc;
V$SQL lists statistics on the shared SQL area without the GROUP BY clause and contains one row for each child of the original SQL text entered. Statistics displayed in V$SQL are normally updated at the end of query execution.
Long-running queries, however, are updated every 5 seconds. This shows the impact of long-running SQL while it is still executing.
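Building on that, here is a minimal sketch of using LAST_ACTIVE_TIME to watch statements whose statistics are still being updated (the 5-minute window is an arbitrary choice for illustration):
-- Statements active in the last 5 minutes, slowest first
SELECT vv.SQL_ID, vv.EXECUTIONS,
       vv.ELAPSED_TIME / 1e6 AS elapsed_seconds,  -- ELAPSED_TIME is in microseconds
       vv.LAST_ACTIVE_TIME
FROM v$sql vv
WHERE vv.LAST_ACTIVE_TIME > SYSDATE - INTERVAL '5' MINUTE
ORDER BY vv.ELAPSED_TIME DESC;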

Related

Dynamic Audit Trigger

I want to keep logs of all tables in one single log table, so that if any DML operation happens on any table in the DB, it gets logged to that one table.
But there should be a dynamic trigger which does not hard-code the column names for every table.
Is there any solution for this?
Regards,
Somdutt Harihar
"Is there any solution for this"
No. This is not how databases work. Strongly enforced data structures are what they do, and that applies to audit tables just as much as to transaction tables.
The reason is quite clear: the time you save by not writing audit code specific to each transactional table is the time you will spend writing a query to retrieve the audit records. The difference is that when you're trying to get the audit records out, you will have your boss standing over your shoulder demanding to know when you can tell them what happened to the payroll records last month, or asking how long it will take to produce that report for the regulators. Are you trying to make the company look like a bunch of clowns? You get the picture. This is not where you want to be.
Also, the performance of a single table to store all the changes to all the tables in the database? That is going to be so slow, you have no idea.
The point is, we can generate the auditing code. It is easy to write some SQL which interrogates the data dictionary and produces DDL for the target tables and triggers to populate those tables.
In fact it gets even easier in 11.2.0.4 and later, because we can use FLASHBACK DATA ARCHIVE (formerly Oracle Total Recall) to build and maintain such journalling functionality automatically, and query it with the AS OF syntax. Find out more in the Oracle documentation.
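A minimal Flashback Data Archive sketch (the archive name, tablespace, and table here are placeholders, and the quota and retention are assumptions for illustration):
-- Create an archive and attach a table to it
CREATE FLASHBACK ARCHIVE audit_fda TABLESPACE fda_ts QUOTA 10G RETENTION 1 YEAR;
ALTER TABLE payroll FLASHBACK ARCHIVE audit_fda;
-- Query the table's history with AS OF
SELECT * FROM payroll AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);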
Okay, so technically there is a solution. You could have a trigger on each table which executes some dynamic PL/SQL to interrogate the data dictionary and assembles a piece of JSON which you stuff into your single table. The single table could be partitioned by day range and sub-partitioned by table name (assuming you have licensed the Partitioning option) to mitigate the performance of querying it.
But that is extremely complex. Running dynamic PL/SQL for every DML statement will have a bad effect on performance, which the users will notice. And this still doesn't solve the fundamental problem of retrieving the audit trail when you need it.
To audit DML actions on any table, just enable such auditing with the following statement:
audit insert table, update table, delete table;
All DML actions against tables will then be recorded in the audit trail, visible through the DBA_AUDIT_OBJECT view (which is based on SYS.AUD$).
The audit will only log the timestamp, user, host and other parameters, not exact copies of the new or old rows.
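For example, a minimal sketch of reviewing the recorded actions (the table name is a placeholder):
SELECT timestamp, username, userhost, action_name, obj_name
FROM dba_audit_object
WHERE obj_name = 'YOUR_TABLE'
ORDER BY timestamp DESC;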

Oracle 12c query results in sqlplus but not SQL Developer

I have a single instance called sp_admin with several tables. One of them is called tenant_api_user. From sqlplus, I can select * from any of the tables, including tenant_api_user (or select count(*), etc.) and the results are fine. However, when I run queries against this one specific table in SQL Developer, it always returns zero records.
When I run select * from tenant_api_user; from SQL Developer, it returns 0 records instantly, with no waiting. That same query from sqlplus returns the only record in the table. When I try to update a record that is visible from sqlplus, SQL Developer says 0 rows updated.
Has anyone experienced this kind of behavior before? It seems to be isolated to just this table; SQL Developer doesn't behave this way with any other table.
If you have entered the data in one session and have not run a COMMIT, you will not be able to see the data in another session until it has been committed.
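A minimal illustration of that visibility rule (assuming a simplified one-column version of the table):
-- Session 1 (sqlplus)
INSERT INTO tenant_api_user (id) VALUES (1);
-- no COMMIT yet: the row is visible only inside this session

-- Session 2 (SQL Developer)
SELECT COUNT(*) FROM tenant_api_user;   -- returns 0

-- Session 1
COMMIT;

-- Session 2
SELECT COUNT(*) FROM tenant_api_user;   -- now returns 1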

How to check whether a delete has occurred in a table at a specified time

Recently, a very strange scenario was reported from one of our sites.
Based on our fields, we found that some kind of delete must have happened on that table for this scenario to arise.
There is no delete for that table in our application code itself, so we checked in GV$SQLAREA (since we use RAC) whether there were any DELETE statements for this table. We found nothing.
Then we tried to do the same kind of delete through PL/SQL Developer. We are able to track every explicit DELETE through GV$SQLAREA or GV$SESSION. But when we use the query below, then lock, edit and commit the rows in PL/SQL Developer, there is no trace:
select t.*, t.rowid
from <table> t;
One thing we did find is that sys.mon_mods$ has the count of deletes, but it is not retained for long, so we cannot trace it by timestamp.
Can anyone help me track this down?
Oracle Version: 11.1.0.7.0
Type : RAC (5 instances)
GV$SQLAREA just shows the SQL statements that are in the shared pool. If a statement is only executed once, then depending on how large the shared pool is and how many distinct SQL statements are executed, it might not stay in the shared pool very long. I certainly wouldn't expect a one-time statement to still be in the shared pool of a reasonably active system after a couple of hours.
Assuming that you didn't enable auditing and that you don't have triggers that record deletes, is the system in ARCHIVELOG mode? Do you have the archived logs from the point in time where the row was deleted? If so, you could potentially use LogMiner to look through the archived logs to find the statement in question.
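A minimal LogMiner sketch, assuming you still have the relevant archived log and EXECUTE on DBMS_LOGMNR (the file name, schema and table names are placeholders):
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/arch/1_1234_987654321.arc',
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Look for the offending DELETE in the mined redo
SELECT scn, timestamp, seg_owner, seg_name, operation, sql_redo
FROM   v$logmnr_contents
WHERE  operation = 'DELETE'
AND    seg_owner = 'YOUR_SCHEMA'
AND    seg_name  = 'YOUR_TABLE';

EXEC DBMS_LOGMNR.END_LOGMNR;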

Oracle trace all SELECTS

I need to do a task but I have no idea how to do it.
Here is the problem:
I have about 1000 tables in an Oracle database and many processes.
Each process does one or more SELECTs on one or many tables.
Because it's almost impossible to look through the source code to find which process does which SELECT on which tables, I would like to have some kind of trigger on SELECT on every table.
The idea is that I will launch the processes one by one, to be able to see which tables each one queries.
I know that there is no trigger on SELECT, but is there anything else?
I need to do this as a one-shot exercise, just to recover the necessary info; it will not run every day.
You could activate auditing. You can audit all SELECT with:
AUDIT SELECT TABLE;
You can specify BY SESSION so that only one record will be written to the audit trail per table accessed per session.
Your AUDIT_TRAIL parameter must be set to either DB or OS. If it is set to DB, the audit trail will be written to the SYS.AUD$ table.
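Once the process has run, a minimal sketch of pulling the accessed tables back out of the DB audit trail (the username is a placeholder):
SELECT username, owner, obj_name, action_name, timestamp
FROM dba_audit_trail
WHERE username = 'PROCESS_USER'
  AND action_name LIKE 'SELECT%'
ORDER BY timestamp;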
Assuming that you can map a "process" in your terminology to a particular Oracle session, you could trace the Oracle session. That would show you all the SQL statements executed by that session.
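For the session-tracing approach, a minimal sketch using DBMS_MONITOR (10g and later; the SID and SERIAL# are placeholders you would look up in V$SESSION):
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
-- ... let the process run, then ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
-- the trace file appears in the database server's trace directory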
You could also potentially do a SQL*Net trace from whatever the client machine is (note that the "client machine" in a three-tier environment is the application server). A SQL*Net trace tends not to be nearly as easy to work with, however.

How can I tell if a Materialized View in Oracle is being used?

We have some Materialized views in our Oracle 9i database that were created a long time ago, by a guy no longer working here. Is there an easy (or any) method to determine whether Oracle is using these views to serve queries? If they aren't being used any more, we'd like to get rid of them. But we don't want to discover after the fact that those views are the things that allow some random report to run in less than a few hours. The answer I'm dreaming of would be something like
SELECT last_used_date FROM dba_magic
WHERE materialized_view_name = 'peters_mview'
Even more awesome would be something that could tell me what actual SQL queries were using the materialized view. I realize I may have to settle for less.
If there is a solution that requires 10g, we are upgrading soon, so those answers would be useful also.
Oracle auditing can tell you this once it is configured as per the docs. After that, enable it with AUDIT SELECT ON {name of materialized view}. The audit trail will be in the AUD$ table in the SYS schema.
Another method, without auditing, would be to read the V$SEGMENT_STATISTICS view after one refresh and before the next to see whether there have been any reads. You'd have to account for any automatic statistics collection jobs as well.
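A minimal sketch of that check (the owner and materialized view name are placeholders):
SELECT statistic_name, value
FROM v$segment_statistics
WHERE owner = 'YOUR_SCHEMA'
  AND object_name = 'PETERS_MVIEW'
  AND statistic_name IN ('logical reads', 'physical reads');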
The V$SQLAREA view has two columns which help identify the queries executed against the database:
SQL_TEXT - VARCHAR2(1000) - First thousand characters of the SQL text for the current cursor
SQL_FULLTEXT - CLOB - All characters of the SQL text for the current cursor
We can use these columns to find the queries using the said materialized views.
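For example, a minimal sketch of searching recent cursors that mention the view by name (PETERS_MVIEW is a placeholder, and plain text matching can produce false positives, e.g. comments containing the name):
SELECT sql_id, last_active_time, sql_text
FROM v$sqlarea
WHERE UPPER(sql_fulltext) LIKE '%PETERS_MVIEW%';
Note that SQL_ID and LAST_ACTIVE_TIME assume 10g or later; on 9i you would key on ADDRESS and HASH_VALUE instead.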
